
Tuesday, July 21, 2009

What is a Services Oriented Storage Solution?

Services Oriented Storage is a strategy that aligns storage resources with business needs. It's an approach that creates a flexible, adaptive storage platform. In the services oriented approach, technologies like storage virtualization and common management tools enable the deployment of Storage Services, such as disaster recovery, data classification, search, and archiving, across a multivendor storage environment. IT can deliver these services whenever and wherever requested, meeting user and application requirements for performance, scalability, availability, data security and affordability.

Why Services Oriented Storage Solutions?

Value Proposition

Services Oriented Storage can help businesses:

* Simplify storage management

* Improve service levels

* Increase storage utilization

* Lower costs

* Scale up and integrate easily, while protecting their investment

Monday, April 20, 2009

Proxy Caching and WAN Optimization

WAN optimization is a set of techniques for improving the performance of applications delivered over wide area network links, with the aim of maximizing business performance and solving problems proactively.

WAN optimization products seek to accelerate a broad range of applications accessed by distributed enterprise users by eliminating redundant transmissions, staging data in local caches, compressing and prioritizing data, and streamlining chatty protocols (e.g., CIFS).

The main benefit of WAN optimization is improved application response time, which is essential where servers and IT resources are centralized. Optimizing the WAN also saves you the cost of bandwidth upgrades.

WAN Optimization is a superset of WAFS (wide area file services). Its component techniques include WAFS, CIFS proxy, HTTPS proxy, media multicasting, Web caching and bandwidth management.

A few WAN/Internet Optimization techniques:

Compression - Relies on data patterns that can be represented more efficiently. Best suited for point-to-point leased lines (see the sketch after this list).

Protocol spoofing - Bundles multiple requests from chatty applications into one. Best suited for point-to-point WAN links.

Traffic shaping - Controls data usage based on spotting specific patterns in the data and allowing or disallowing specific traffic. Best suited for both point-to-point leased lines and Internet connections.

Equalizing - Makes assumptions about what needs immediate priority based on data usage. An excellent choice for wide-open, unregulated Internet connections and clogged VPN tunnels.

Connection Limits - Prevents access gridlock in routers and access points due to denial of service or peer-to-peer traffic. Best suited for wide-open Internet access links; can also be used on WAN links.

Simple Rate Limits - Prevents one user from getting more than a fixed amount of data. Best suited as a stopgap first effort for remedying a congested Internet connection or WAN link.
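
To make the compression technique concrete, here is a minimal Python sketch (the payload is illustrative): repetitive traffic shrinks dramatically, which is exactly the pattern a WAN optimizer exploits on a leased line.

import zlib

# Repetitive traffic (chatty protocols, logs) compresses very well;
# already-compressed data (JPEG, ZIP) would gain almost nothing.
payload = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 200

compressed = zlib.compress(payload, level=9)
print(f"original: {len(payload)} bytes")
print(f"compressed: {len(compressed)} bytes "
      f"({len(compressed) / len(payload):.1%} of original)")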

Proxy caching accelerates service requests by retrieving content saved from a previous request made by the same client or even other clients. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and cost, while significantly increasing performance. Most ISPs and large businesses have a caching proxy. These machines are built to deliver superb file system performance (often with RAID and journaling) and also contain hot-rodded versions of TCP.
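
As a rough illustration of the caching idea, the Python sketch below serves repeat requests from a local store instead of refetching them upstream. This is a simplification: a real caching proxy also honors Cache-Control and Expires headers, and the URL here is only an example.

import urllib.request

cache = {}  # url -> response body

def fetch(url):
    """Return the body for url, serving repeat requests from the local cache."""
    if url in cache:
        print(f"cache hit: {url}")
        return cache[url]
    print(f"cache miss, fetching upstream: {url}")
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    cache[url] = body
    return body

# The second call is answered locally, saving upstream bandwidth.
page = fetch("http://example.com/")
page = fetch("http://example.com/")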

Wednesday, April 15, 2009

Beware of a disease called FORWARDITIS


As an organization that manages Information Security and Network Security, we consider it our duty to warn every email user about the hazards involved in forwarding mails without cleaning them up.

Typically, people who have the urge to share a mail hit the forward button without cleaning it up first.

What we mean is: the email IDs of all the previous recipients, and the mail IDs within the mail itself, get forwarded to the new recipients.


The problem herein is that scamsters capture all these mail IDs and sell them on the internet to other scamsters, Viagra dealers, marketers and the like.


When a mail is sent with lots of mail IDs in the CC list, Internet Service Providers and organizations' mail servers automatically send these mails to junk folders, and subsequently have the option to blacklist the sender.


Also it is very annoying for the receiver to keep scrolling down to reach the actual content.


If you need to send a mail to multiple recipients, insert their mail IDs in the BCC field, and remove all the mail IDs from the message body and the CC field.
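
For the technically inclined, here is the same etiquette in code: a minimal sketch using Python's standard smtplib (the addresses and SMTP host are placeholders, not real accounts). The recipients go in BCC, so none of them sees the others' mail IDs.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "you@example.com"
msg["To"] = "you@example.com"  # the visible copy goes to yourself
msg["Bcc"] = "friend1@example.com, friend2@example.com"  # hidden from everyone
msg["Subject"] = "Worth a read"
msg.set_content("Forwarded content, with all earlier mail IDs stripped out.")

with smtplib.SMTP("smtp.example.com") as server:
    # send_message() uses the Bcc addresses for delivery but omits
    # the Bcc header from the transmitted message.
    server.send_message(msg)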


Also, when a mail arrives, you should typically get suspicious when:

· A mail/greeting comes with an attachment from an unknown sender.

· A mail which says you should forward it to as many people as you can in order to help a cause.

· A mail that asks for help (like the one about a long-suffering girl who will get funds if you forward the mail, etc.)

· A mail that says URGENT, IMPORTANT.

· A mail that claims Authenticity by referring to agencies like CNN, BBC, NASA, FBI etc. without furnishing reference links.


If all this is confusing and if you want to find out if a message is true or not, just copy the subject line and Google it or better still, visit dedicated sites like www.snopes.com that maintain an updated database of such scams.


Alternatively, if you would like more technical help, get in touch with us.

Please do communicate and spread this message to your friends so that they too don't fall prey to the disease called Forwarditis.

Tuesday, December 9, 2008

Fretting about POWER COSTS... Help is here -> LANDesk Power Manager



Don't fret and fume over those rising power costs....

Don't worry about Green I.T. initiative audits...

Want to know how? Just click on the image and you will learn about the solution implemented by Venky.

Tuesday, November 11, 2008

Worried about handling those numerous network devices... Wish you had a single window to control them... Hitch on to NAC


What is Network Access Control?
Network Access Control is a set of protocols used to define how to secure the network nodes prior to the nodes accessing the network.
It is also an approach to computer network security that attempts to unify endpoint security technology (such as antivirus, host intrusion prevention, and vulnerability assessment), user or system authentication and network security enforcement.
Network Access Control (NAC) aims to do exactly what the name implies: control access to a network with policies, including pre-admission endpoint security policy checks and post-admission controls over where users and devices can go on a network and what they can do.

Benefits of Network Access Control

Some of the key benefits are:
• Automatic remediation, i.e. fixing non-compliant nodes before allowing access.
• Allowing the seamless integration of network infrastructure such as routers, switches, back office servers and end user computing equipment to ensure the information system is operating securely before interoperability is allowed.
• Mitigation of zero-day attacks
The key value proposition of NAC solutions is the ability to prevent end-stations that lack antivirus, patches, or host intrusion prevention software from accessing the network and placing other computers at risk of cross-contamination by network worms.
• Policy enforcement
NAC solutions allow network operators to define policies, such as the types of computers or roles of users allowed to access areas of the network, and enforce them in switches, routers, and network middleboxes.
• Identity and access management
Where conventional IP networks enforce access policies in terms of IP addresses, NAC environments attempt to do so based on authenticated user identities, at least for user end-stations such as laptops and desktop computers.
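
The pre-admission idea is easy to sketch in Python. The attribute names and policy below are hypothetical; in practice a NAC agent collects these facts from the endpoint and the switch enforces the verdict, e.g. by moving the port to a quarantine VLAN.

REQUIRED = {"antivirus_running": True, "patches_current": True, "host_ips_enabled": True}

def admission_check(endpoint):
    """Pre-admission check: return (allowed, list of failed policy items)."""
    failures = [item for item, wanted in REQUIRED.items() if endpoint.get(item) != wanted]
    return (not failures, failures)

laptop = {"antivirus_running": True, "patches_current": False, "host_ips_enabled": True}
allowed, failures = admission_check(laptop)
if not allowed:
    # Quarantine and remediate (e.g., push the missing patches), then re-check.
    print("quarantined; remediation needed for:", failures)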

To know more on “The options available”, please contact Ms. Leela on (+91) 97314-00693.
Alternatively you could drop in a word at: leela@22by7.in , sigma@22by7.in and we would be glad to touch base with you.

Tuesday, October 28, 2008

Unified Storage... Simplified for YOU


What is Unified Storage?

Unified storage (sometimes termed network unified storage or NUS) is a storage system that makes it possible to run and manage files and applications from a single device. To this end, a unified storage system consolidates file-based and block-based access in a single storage platform and supports Fibre Channel SAN, IP-based SAN (iSCSI), and NAS (network attached storage).

How is it implemented in practice?

Unified storage is often implemented in a NAS platform that is modified to add block-mode support.

Benefits:

· Simultaneously enables storage of file data and handles the block-based I/O (input/output) of enterprise applications.

· Reduced hardware requirements – Instead of separate storage platforms, like NAS for file-based storage and a RAID disk array for block-based storage, unified storage combines both modes in a single device.

· Easy to administer since there is only a single device to be deployed.

· Lower Capital expenditures for the enterprise.

· Simpler to manage.

· Advanced features like storage snapshots and replication.

· Unified storage systems generally cost about the same, and enjoy the same level of reliability, as dedicated file or block storage systems.

Tuesday, September 30, 2008

Data loss is inevitable... R U Ready For It


No organization that depends on technology and stores data can afford to be without a Disaster Recovery strategy and a backup infrastructure.

Disaster recovery is the process, policies and procedures of restoring operations critical to the resumption of business, including regaining access to data (records, hardware, software, etc.), communications (incoming, outgoing, toll-free, fax, etc.), workspace, and other business processes after a natural or human-induced disaster.

A disaster recovery plan (DRP) should also include plans for coping with the unexpected or sudden loss of communications and/or key personnel. Disaster recovery planning is part of a larger process known as business continuity planning (BCP). With the rise of information technology and the reliance on business-critical information, protecting irreplaceable data has become a business priority in recent years. Hence the need to back up your digital information to limit data loss and to aid data recovery.

Knowing what you need is half the battle and you can't know what you need until you have an understanding of what is critical. The basic question must be, "If the business has to run on a minimal set of applications and infrastructure, what would those applications and support systems be?"

Weigh the amount of risk you're willing to take, and the kind of damage a disaster could do to the business, against the cost of varying levels of disaster readiness.

Accurate baseline information about your systems will get you on the road to Disaster Recovery. Once there, consider your options. If you have multiple, geographically diverse offices, consider having them back up one another. A little extra hardware and some form of disk-to-disk replication will set you up. Remember to budget time and resources for testing--when you need it is not the time to find out your data isn't replicating.

How long can you afford to be down? Get an idea of the cost of downtime versus the cost of restoration. Don't forget to account for whether you have to restore from tape, or are willing to allow disk-to-disk backups to your provider. Disk-to-disk will make your recovery a lot faster than if you have to courier tapes from your tape storage location to the Disaster Recovery site.
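
A quick back-of-the-envelope calculation makes this trade-off concrete. The figures below are purely illustrative; plug in your own downtime cost and measured restore times.

downtime_cost_per_hour = 50_000  # illustrative: revenue + productivity lost per hour
tape_restore_hours = 24          # courier tapes to the DR site, then restore
disk_restore_hours = 4           # disk-to-disk replication, restore locally

tape_total = downtime_cost_per_hour * tape_restore_hours
disk_total = downtime_cost_per_hour * disk_restore_hours
print(f"outage cost with tape restore:  {tape_total:,}")
print(f"outage cost with disk-to-disk:  {disk_total:,}")
print(f"headroom to spend on faster DR: {tape_total - disk_total:,}")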

Disaster Recovery Strategies
Mentioned below are a few of the most common strategies for data protection.
• Backups made to tape and sent off-site at regular intervals (preferably daily)
• Backups made to disk on-site and automatically copied to off-site disk, or made directly to off-site disk.
• Replication of data to an off-site location, which overcomes the need to restore the data (only the systems then need to be restored or synced). This generally makes use of Storage Area Network (SAN) technology
• High availability systems which keep both the data and system replicated off-site, enabling continuous access to systems and data.

Organizations must also implement precautionary measures, some of which are listed below with an objective of preventing a disaster situation in the first place:

• Local mirrors of systems and/or data and use of disk protection technology such as RAID
• Surge Protectors — to minimize the effect of power surges on delicate electronic equipment
• Uninterruptible Power Supply (UPS) and/or Backup Generator to keep systems going in the event of a power failure
• Fire prevention — more alarms, accessible fire extinguishers
• Anti-virus software and other security measures

References
1. Buchanan, Sally. "Emergency preparedness." from Paul Banks and Roberta Pilette. Preservation Issues and Planning. Chicago: American Library Association, 2000. 159-165. ISBN 978-0-8389-0776-4
2. Hoffer, Jim. "Backing Up Business - Industry Trend or Event." Health Management Technology, Jan 2001

Tuesday, September 23, 2008

Handle Daily Internet Usage cost effectively... with Link Balancers.

Link Balancing, commonly referred to as dual WAN routing, multihoming or network load balancing, is the ability to balance traffic across two WAN links without using complex routing protocols.

A Link Balancer is an affordable and powerful solution for routing and managing traffic across multiple Internet connections. Designed to scale for high bandwidth requirements and provide business continuity for an organization of any size, it optimizes the use of multiple Internet links, such as T1s, T3s, DSL and cable connections from one or multiple Internet service providers. Capable of automatic failover in the event of link failure, the Link Balancer helps assure that your network is always connected to the Internet.

This capability balances network sessions (web, email, etc.) over multiple connections in order to spread out the amount of bandwidth used by each LAN user, thus increasing the total amount of bandwidth available. Example: a user has a single WAN connection to the Internet operating at 1.5 Mbit/s and adds a second broadband (cable, DSL, wireless) connection operating at 2.5 Mbit/s. This provides a total of 4 Mbit/s of bandwidth when balancing sessions.
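
A minimal Python sketch of session balancing: each new session is assigned to a link in proportion to its capacity, so the 1.5 Mbit/s and 2.5 Mbit/s links above behave, in aggregate, like a 4 Mbit/s pipe. The link names and weights are illustrative.

import random

links = {"leased_line": 1.5, "broadband": 2.5}  # capacities in Mbit/s

def assign_session(links):
    """Pick a link for a new session, weighted by link capacity."""
    names = list(links)
    return random.choices(names, weights=[links[n] for n in names])[0]

# Each new web/email session lands on one link; failover simply means
# dropping a dead link from the dict before the next assignment.
for session in range(5):
    print(f"session {session} -> {assign_session(links)}")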

Advantages to your Business/Organization:



• Aggregates Internet connection links
• Automated failover
• Bandwidth management
• Quality of Service (QoS) for Internet applications
• Traditional firewall
• Reduces the need to purchase multiple high speed and high cost links to handle the daily Internet usage.
• Provides Network Redundancy.

References: Wikipedia
Barracuda Networks Inc.

Image Reference: www.searchsecurity.de

Thursday, September 18, 2008

Outstanding Performance - Technical Support



Mr. Amarnath Reddy, Senior Engineer - Security Solutions, has been awarded the "Outstanding Performance - Technical Support" National Award from FORTINET INC for the year 2007-08.

Mr. Vinay M.S, Principal Architect - Security Solutions, is seen receiving the award on his behalf at Kuala Lumpur, Malaysia.

Friday, August 29, 2008

Walking the Network Tight Rope made easier... With Load Balancers


Load Balancing is defined as a process and technology that distributes site traffic among several servers using a network based device. This device intercepts traffic destined for a site and redirects that traffic to various servers.
It is a technique to spread work between two or more computers, network links, CPUs, hard drives, or other resources in order to get optimal resource utilization, throughput, or response time. Using multiple components with load balancing, instead of a single component, may increase reliability through redundancy. The balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch). It is commonly used to mediate internal communications in computer clusters, especially high-availability clusters. This process is completely transparent to the end user.
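
As a simple illustration, here is the most basic distribution policy, round robin, sketched in Python. The server addresses are placeholders; a production balancer layers health checks, persistence and the features listed below on top of this loop.

import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # backend pool (placeholders)
rotation = itertools.cycle(servers)

def pick_server():
    """Round robin: each new connection goes to the next server in rotation."""
    return next(rotation)

for request in range(6):
    print(f"request {request} -> {pick_server()}")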

Benefits of Load Balancing:

- Optimal resource utilization
- Better throughput and response time
- Increases reliability through redundancy
- Streamlining of data communication
- Ensures a response to every request
- Reduces dropping of requests and data.
- Offers content-aware distribution, by doing things such as reading URLs, intercepting cookies and XML parsing.
- Maintains a watch on the servers and ensures that they respond to the traffic. If they are not responding, then it takes them out of rotation.
- Priority activation: When the number of available servers drops below a certain number, or load gets too high, standby servers can be brought online.
- SSL Offload and Acceleration reduces the burden on the Web Servers and performance will not degrade for the end users.
- Distributed Denial of Service (DDoS) attack protection through features such as SYN cookies and delayed-binding to mitigate SYN flood attacks and generally offload work from the servers to a more efficient platform.
- HTTP compression: reduces the amount of data to be transferred for HTTP objects by utilizing gzip compression, available in all modern web browsers.
- TCP buffering: the load balancer can buffer responses from the server and spoon-feed the data out to slow clients, allowing the server to move on to other tasks.
- HTTP caching: the load balancer can store static content so that some requests can be handled without contacting the web servers.
- Content Filtering: some load balancers can arbitrarily modify traffic on the way through.
- HTTP security: some load balancers can hide HTTP error pages, remove server identification headers from HTTP responses, and encrypt cookies so end users can't manipulate them.
- Priority queuing: also known as rate shaping, the ability to give different priority to different traffic.
- Client authentication: authenticate users against a variety of authentication sources before allowing them access to a website.
- Firewall: direct connections to backend servers are prevented, for security reasons.

References: Server Load Balancing by Tony Bourke
Wikipedia

Image Reference: http://images.newsfactor.com/images/id/4443/story-data-012.jpg

Monday, August 25, 2008

Keep unwanted mail away with Email filtering


Email filtering is the processing of e-mail to organize it according to specified criteria. Most often this refers to the automatic processing of incoming messages, but the term also applies to the intervention of human intelligence in addition to anti-spam techniques, and to outgoing emails as well as those being received.

Email filtering software takes emails as input. For its output, it might pass the message through unchanged for delivery to the user's mailbox, redirect the message for delivery elsewhere, or even throw the message away. Some mail filters are able to edit messages during processing.

Common uses for mail filters include removal of spam and of computer viruses. A less common use is to inspect outgoing e-mail at some companies to ensure that employees comply with appropriate laws. Users might also employ a mail filter to prioritize messages, and to sort them into folders based on subject matter or other criteria.
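
A toy sketch of such a filter using Python's standard email module; the rules here are invented purely for illustration, and a real filter would score messages on far more signals.

import email

RULES = [
    ("junk",     lambda m: "lottery" in (m["Subject"] or "").lower()),
    ("invoices", lambda m: "invoice" in (m["Subject"] or "").lower()),
]

def classify(raw_message):
    """Return the folder an incoming RFC 822 message should be filed into."""
    msg = email.message_from_string(raw_message)
    for folder, matches in RULES:
        if matches(msg):
            return folder
    return "inbox"

sample = "From: a@example.com\nSubject: Your invoice for August\n\nPlease find attached."
print(classify(sample))  # -> invoices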

Advantages:

1. Defend against inbound threats

2. Prevent data leakage through emails

3. Encrypt sensitive information

4. Help in analyzing messaging infrastructure.


References: Wikipedia
Inputs from Gerry Tucker, Director - Sales, APAC, Proofpoint Systems

Monday, August 18, 2008

Increase Productivity.... Implement an SSL VPN



What is an SSL-VPN?

SSL-VPN stands for Secure Socket Layer Virtual Private Network. It is a term used to refer to any device capable of creating a semi-permanent encrypted tunnel over the public network between two private machines or networks, in order to pass non-protocol-specific, or arbitrary, traffic. This tunnel can carry all forms of traffic between these two machines, meaning it encrypts on a per-link basis, not on a per-application basis.

It is a mechanism provided to communicate securely between two points with an insecure network in between them.
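
At its core this is what TLS does to an ordinary TCP connection, as the Python sketch below shows for the client side (the hostname is a placeholder): once the handshake completes, everything sent through the socket is encrypted at the connection level, whatever application protocol rides inside.

import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("vpn.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="vpn.example.com") as tls:
        print("negotiated:", tls.version())  # e.g. TLSv1.3
        tls.sendall(b"any application traffic goes here")  # encrypted on the wire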

Benefits of using SSL VPN:

· Improves workforce productivity, since employees and contractors can perform tasks even when not physically present in their usual work facilities.

· Easy deployment since it does not require any special client software to be installed.

· Provides more security options.

· Improved manageability due to highly configurable access control capabilities, health checks etc.

· Lowers costs because of the increased self-service capabilities for conducting business with outside parties such as suppliers and customers. Employees can work remotely on a regular basis (e.g., IT consulting), thereby allowing the organization to maintain less office space (and save money).

· Increased self-service capabilities for suppliers improve their efficiency, yielding better-negotiated service/product rates.

· If remote access is used as part of business-continuity strategy, fewer seats may be necessary at disaster-recovery/business-continuity facilities than if all workers must work at the secondary site.

References: http://www.sans.org/reading_room/whitepapers/vpns/1459.php
http://sslvpnbook.packtpub.com/chapter6.htm

Thursday, August 14, 2008

Identify ME!! Securing Your Future with Two/Three Factor Authentication



What is Authentication?

Authentication (from Greek αυθεντικός; real or genuine, from authentes; author) is the act of establishing or confirming something (or someone) as authentic, that is, that claims made by or about the thing are true. This might involve confirming the identity of a person or assuring that a computer program is a trusted one.

What is an Authentication Factor?
An authentication factor is a piece of information and process used to authenticate or verify a person's identity for security purposes.

What is Transactional Authentication?
Transaction authentication generally refers to the Internet-based security method of securely identifying a user through two or three factor authentication at a transaction level, rather than at the traditional Session or Logon level.

Types of Factor Authentications:

1. Two Factor Authentication: Two-factor authentication is a security process in which the user provides two means of identification, one of which is typically a physical token, such as a card, and the other of which is typically something memorized, such as a security code. In this context, the two factors involved are sometimes spoken of as something you have and something you know. A common example of two-factor authentication is a bank card: the card itself is the physical item and the personal identification number (PIN) is the data that goes with it.

2. Three Factor Authentication: a security process in which the user has to provide the following three means of identification:
• Something the user has (e.g., ID card, security token, software token)
• Something the user knows (e.g., a password, pass phrase, or personal identification number (PIN))
• Something the user is or does (e.g., fingerprint or retinal pattern, DNA sequence, signature or voice recognition, unique bio-electric signals, or any other biometric identifier)

A few examples of the factors that could be used as SOMETHING THE USER HAS:

Tokens: The most common forms of the 'something you have' are smart cards and USB tokens. Differences between the smart card and USB token are diminishing; both technologies include a microcontroller, an OS, a security application, and a secured storage area.
Biometrics: Vendors are beginning to add biometric readers on the devices, thereby providing multi-factor authentication. Users biometrically authenticate via their fingerprint to the smart card or token and then enter a PIN or password in order to open the credential vault.
Phones: A new category of T-FA tools transforms the PC user's mobile phone into a token device using SMS messaging or an interactive telephone call. Since the user now communicates over two channels, the mobile phone becomes a two-factor, two-channel authentication mechanism.
Smart cards
Smart cards are about the same size as a credit card and perform both the function of a proximity card and network authentication. Users can authenticate into the building via proximity detection and then insert the card into their PC to produce network logon credentials. They can also serve as ID badges.
Universal Serial Bus
A USB token has a different form factor; it can't fit in a wallet, but can easily be attached to a key ring. A USB port is standard equipment on today's computers, and USB tokens generally have a much larger storage capacity for logon credentials than smart cards.
OTP Token: Some manufacturers also offer a One Time Password (OTP) token. These have an LCD screen which displays a pseudo-random number consisting of 6 or more alphanumeric characters (sometimes numbers, sometimes combinations of letters and numbers, depending upon vendor and model). This pseudo-random number changes at pre-determined intervals, usually every 60 seconds, but they can also change at other time intervals or after a user event, such as the user pushing a button on the token. Tokens that change after a pre-determined time are called time-based, and tokens that require a user event are referred to as sequence-based (since the interval value is the current sequence number of the user events, i.e. 1, 2, 3, 4, etc.). When this pseudo-random number is combined with a PIN or password, the resulting pass code is considered two factors of authentication (something you know with the PIN/password, and something you have from the OTP token). There are also hybrid-tokens that provide a combination of the capabilities of smartcards, USB tokens, and OTP tokens.
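
The time-based variant is easy to sketch with Python's standard library, following the RFC 4226/6238 recipe of HMAC-ing the current time step and truncating the result. The shared secret below is a placeholder provisioned at enrollment.

import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 60, digits: int = 6) -> str:
    """Time-based OTP: HMAC the current time step, then dynamically truncate."""
    counter = int(time.time()) // interval  # changes every `interval` seconds
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Token and server share the secret; both compute the same code for the
# current time window, so the displayed number proves possession of the token.
print(totp(b"secret-provisioned-at-enrollment"))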

Advantages Of using 2/3 Factor Authentication:
1. Drastically reduces the incidence of online identity theft, phishing expeditions and other online frauds.
2. Ensures that you have a very strong authentication method in place.
3. Increases the confidence and trust levels of the users interacting with your network.
4. Adheres to the compliance rules of various standards especially if you are in the financial domain.
5. Ensures that you have sufficient levels of security to thwart any attacks on your network.
6. It allows you to provide secure remote access to your network.

Reference: Wikipedia.
Image Source: www.koshatech.com/images/solutions_img.jpg
www.info.gov.hk/.../images/2_factors.jpg

Tuesday, August 5, 2008

Keeping Away the Peeping Toms... With Mail Encryption



If you are mailing a Cheque/DD to somebody, or a very important document to a family member or a customer, do you send it by ordinary post? NO, in all probability you would either send it by courier or by registered post to ensure that the packet reaches the hands of the right and intended person only. Moreover, you will ensure that the envelope holding these items is not transparent or easily tamperable, which helps hide the contents even better. To confirm that it has been received by the intended person, you ask for an acknowledgement, the date of delivery, etc.

Why then would you send personal or confidential information in an unprotected email?

Why do I need to encrypt my emails?

Sending information in an unencrypted email is the equivalent of sending a cheque/DD in an unsealed envelope or writing confidential information on a postcard for all to see. This will allow anybody and everybody to take advantage of such information and use it to defraud us. We are all sure that none of us would like to encounter such a situation.

While in transit, e-mail messages are sent through one or more mail transfer agent servers until they reach the destination e-mail server. Someone with access to one of these servers can easily intercept and read the e-mail message. In addition, e-mail messages that travel through these mail transfer agent (MTA) servers are very likely stored and backed up even after delivery to the recipient, and even if the recipient and the sender have deleted their copies of the message. This stored copy of the e-mail may be subject to snooping in the future, and may persist indefinitely.
Additionally, the internet makes it easy to “spoof” the sender field of an email message, allowing nefarious individuals to misrepresent their identities. This has led to a phenomenon known as “phishing” and other forms of attacks over e-mail, underscoring the importance of the recipient being able to reasonably authenticate the sender's identity. That is the reason why we need to ENCRYPT OUR MAILS.

Techniques used to encrypt emails:
1. Symmetric crypts: both recipient and sender share a common key or password that is used to encrypt and decrypt the message.

2. Asymmetric crypts: here, two keys are used. One is a public key that can be shared with everyone and is used to encrypt the message. The other is the private or secret key, known only to the recipient, and used to decrypt the message. Both keys are required in a transaction.
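
Both techniques can be demonstrated with the third-party Python cryptography package (pip install cryptography); this is only a sketch, and the messages and key handling are illustrative.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# 1. Symmetric: sender and recipient share one secret key,
#    which must be exchanged securely out of band.
shared_key = Fernet.generate_key()
token = Fernet(shared_key).encrypt(b"Wire the funds on Friday")
print(Fernet(shared_key).decrypt(token))

# 2. Asymmetric: encrypt with the recipient's public key;
#    only the matching private key can decrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(b"Wire the funds on Friday", oaep)
print(private_key.decrypt(ciphertext, oaep))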

E-mail encryption design approaches

1. The Client-Based Method suggests that the sender of the email should be responsible for e-mail encryption.

2. The Gateway-based Method suggests that the organization is responsible for e-mail security, and encryption should be performed on a server operating on the corporate network, based on the security and regulatory compliance needs of the company and its industry vertical.

Methods of Message Retrieval
1. The “in box” delivery model: the encrypted e-mail is delivered to the user’s email inbox, where they can open the encrypted message after providing the appropriate password or credentials.

2. The “mailbox” model: the user receives an e-mail with a hyperlink to the encrypted message. The user then follows the hyperlink to arrive at a website where they submit their credentials and are then able to view the decrypted message.

Standard approaches to e-mail encryption
The need for e-mail encryption has led to a variety of solutions – some from standards bodies, and some from the marketplace. Below are a few of these approaches:

1. S/MIME: S/MIME (Secure/Multipurpose Internet Mail Extensions) is a standard for public key encryption and signing of e-mail. S/MIME was developed by RSA Data Security, Inc. It provides the cryptographic security services for authentication, message integrity, and non-repudiation by combining a digital signature with encryption. Before S/MIME can be used in an application, the user must obtain and successfully install a unique key/certificate, either from an in-house Certificate Authority (CA) or from a public CA. Encryption requires storing the destination party's certificate, a process that is typically automated when receiving a message from the party with a valid signing certificate attached.
2. PGP and OpenPGP: Pretty Good Privacy (PGP) is a standard that delivers cryptographic privacy and authentication. The first version of PGP, by designer and developer Phil Zimmermann, was released as an open standard. Zimmermann and others have developed subsequent versions. Eventually, the PGP secure e-mail offering was adopted as an Internet standards-track specification known as OpenPGP, which is now an open standard. PGP and OpenPGP require a client or plug-in. PGP uses both public-key cryptography and symmetric-key cryptography.
3. PostX Registered Envelope Encryption and Security: The PostX Registered Envelope is a secure delivery model for PostX Envelope. The Registered Envelope uses online authentication for decryption key retrieval to provide secure auditable message delivery. The Registered Envelope delivers both the encrypted payload and necessary decryption code via an e-mail attachment to the recipient. E-mail payload is encrypted with a unique (per message) secure random session key. The session key is stored in the PostX KeyServer (and is not sent with the message itself).
4. Identity-Based Encryption: In the 1980s, identity-based encryption (IBE) methods were developed for e-mail by RSA and others to communicate securely in ad hoc environments. In this model, the e-mail address of the recipient is used to perform the e-mail encryption. In order to provide the strength of a password or authentication, identity-based encryption requires client software.
5. Pull solution: In this model, the recipient is pulled into a secure message inbox. In this inbox, the recipient can perform all of the e-mail functions in a branded environment.

Advantages:

1. Encrypting your email will keep all but the most dedicated hackers from intercepting and reading your private communications.

2. Using a personal email certificate, you can digitally sign your email so that recipients can verify that it's really from you, as well as encrypt your messages so that only the intended recipients can view them. This will help stem the tide of spam and malware being distributed in your name.

3. When your contacts receive an unsigned message with your email ID spoofed, they will realize that it's not from you and will delete it.

4. Protect your integrity and confidentiality.

5. It will also help you to adhere to the compliance rules of most standards.



Reference: Wikipedia, About.com.

Image Source: http://images.teamsugar.com/

Friday, July 25, 2008

PC LCM -> Your Virtual Assistant…

Are you a Sys Admin perplexed and tired of running around maintaining your systems…
Are you worried that the right patches have not been downloaded and installed….
Are you not aware of which licenses are about to expire and when?

Do not Despair… Just adopt a PC Life Cycle Management Solution and put all your fears to rest… Relax and Relish your work…

Properly managing an IT environment requires expertise and often takes significant amounts of time and effort. System administrators are responsible for providing their organizations with access to critical applications and services while ensuring that systems perform optimally and remain secure. Unfortunately, keeping pace with the frequency of changes in an IT environment of any size can be a daunting task. The problem is often more pronounced in midsize organizations, where system administrators are forced to juggle many responsibilities.

Issues such as deployment, monitoring, and updating computers can have a significant impact on organizations’ budgets (not to mention system administrators’ blood pressure).

While it might be possible to complete some tasks manually, the number of devices and applications that must be managed can quickly become overwhelming. Add in priorities unrelated to desktop lifecycle management, and help is sure to be welcome. The best solution for managing IT environments that are increasing in size and complexity is through automation.

It is here that PC Life Cycle Management Solutions step in and help you mitigate all the associated complications.

A typical PC Life Cycle Management Solution will do the following for you:

• Sophisticated MSI packaging
• Unattended remote client resets
• Comprehensive inventory-based distribution
• Global scheduling of jobs and executing them
• Intelligent multicast replication
• Complete system repair

• Drag and drop configuration management
• Backup/restore of user personality and locally saved data from a single PC
• Centralized reporting functions
• Native integration with the Directory services.
• Patch management to distribute patches and virus updates
• Bandwidth throttling
• Mandatory (push) and software request (pull) distribution
• Wake on LAN
• OS deployment
• A single Management console to manage all your devices
• Security Management
• Define process workflows to dynamically manage the devices on the network right from purchase to retirement

How do my organization and I benefit if we opt for a PC LCM?
• Reduces I.T. Labour and Asset ownership costs
• Adherence to both internal and external compliance standards.
• Consistent User Experience
• Centralized and Single Management Console reduces the strain on the Sys Admin
• Know what you have in your network and where in a jiffy.
• Up to date and current information about the health of your devices allows you to undertake preventive measures.
• Streamline the existing process and ensure that there is a common policy to handle unforeseen circumstances.
• Automatically update, deploy and manage the software on the clients.

References:
1. www.Microsoft.com : White paper on PC Life Cycle Management
2. www.Pactech.net/wininstall