Saturday, September 11, 2010

Tips for taking your contact centre into the cloud.

Business Week claimed in 2008 that ‘cloud computing is changing the world’.  In the same year, commentators quoted on CNET.com suggested cloud computing represented a ‘paradigm shift’ and was ‘the new black’.  Hyperbole surrounds this subject.

Cutting through the hype is vital if enterprise-level cloud services are to be brought to bear on the contact centre.  Most people use ‘the cloud’ to mean cloud computing.  In fact, computing power is simply one service, albeit one of the most common, that can be delivered via the cloud.  As a rule of thumb, cloud delivery means a service is pay-as-you-use, scalable, based on shorter contract terms – hours rather than years – and consumed via a web portal.

A cloud-delivered contact centre meets these criteria, combining hosted IP telephony and automated, voice-activated software-as-a-service to deliver a package that is deeply scalable and can be purchased in new and flexible ways, such as per concurrent agent and by the hour.  This new level of cost granularity will allow chief operating officers and heads of customer service to measure more closely than ever the efficiency of their contact centre infrastructure, unlocking service improvements and additional cost savings in the future.

The cost efficiency of a contact centre is based largely on how successfully it can be operated at or near its capacity.  This is the key advantage of a cloud-based contact centre.  For example, imagine a travel company taking hotel bookings for the 2010 World Cup in South Africa.  The company has invested in contact centre infrastructure sufficient to handle peak demand in a normal month.  As the World Cup approaches, peak times may bring twice as many incoming calls as its contact centre is staffed to cope with.

But imagine now the company employs an automated service that presents the customer with choices about the hotel they want, the room type, length of stay, whether breakfast is included or not, and so on.  Critically, the service is based on applications hosted on virtualised servers in the cloud.  If the travel company is using software-as-a-service in this way, scaling up services for only a few hours is suddenly feasible, as it no longer requires the company to have free server space itself, or trained customer service agents waiting on hand to process the calls.

In 2010, finding the right technology partner to move contact centres into the cloud, and the right commercial model to buy those services, will be vital.  This year, in particular, purchasing decisions will be under scrutiny by finance departments, regulators and investors.  What advice, then, can be given to organisations planning to negotiate increasingly complex and nuanced technology and payment models?  Here are BT's tips – ‘ten for 2010’:

1) Don’t underestimate ‘self-service’ technologies.  They can do more than you think and customers don’t dislike them as much as they might say.  Three fifths (60 per cent) of US and UK customers would rather use a voice-based self-service system than an offshore contact centre.

2) Voice-based self-service can also improve efficiencies and prevent avoidable manned contacts – two vital outcomes in the current climate.  Remember that with self-service and virtualised options, you can deliver 24/7 customer service without employing a 24/7 rota of agents.

3) Hosted services help you cope.  The advantages of hosted services are not just the obvious financial ones such as the avoidance of capital expenditure.  One real benefit is the ability to scale.  If your demand fluctuates by 40 or 50 per cent, that means for much of the time you are paying for twice as much capacity as you need.  Hosting services in the network means you pay only for what you use – and they can simply be scaled by a factor of 10 or 100 if that is required.

4) Take the call to the agent, not the agent to the call.  Save money and the environment by using cloud-based contact centres, routing calls to agents instead of having your agents go to a place to receive calls.  It saves money on facilities, increases the pool of skilled workers available, increases their productive hours and allows resources to be flexed as needed.

5) Read up on virtualisation!  Virtualising services is happening all over the place, right now.  It will provide agent savings (of up to 15 per cent, according to BT estimates) while improving the average answer time, thereby improving customer satisfaction.

Hosted contact centres, delivered from the network, are the quickest, most cost-efficient way to virtualise resources.  This can improve service quality by connecting the best person with the right skill to the right enquiry every time.  Moreover, costs can be optimised, by maximising the use of expensive skills and managing skills centrally as one big ‘pool’.  Any agent can be available to any call, just by connecting to the hosted platform.

6) Choose non-geographic telephone numbers.  That way, you are not tying your organisation to a specific location.  The number can move with you as you grow without you having to change your main contact numbers.  Your contact number can even become part of your brand.

7) Consider looking beyond call recording as a ‘tick in the box’ for industry regulation.  Instead, view it as an aid to understanding your customers and improving the quality of agent interaction.  Let automated analytics loose on your archive of recorded calls and learn things you never knew about your customers, your agents, your IT infrastructure and your products and services.

8) As the delivery of customer contact services becomes increasingly technological, organisations must think carefully about what is mission-critical.  Is operating contact centre infrastructure a central aspect of the organisation?  Concentrating on core business competences is likely to meet the approval of your shareholders and investors, and letting a specialist provider operate your contact centre infrastructure takes away the problems of supplier management and service support.

9) Protect yourself against rapid change.  Innovations such as cloud services, virtualisation and unified communications are likely to bring about a step change in the way customer contact is delivered and priced.  The best way to remain at the leading edge of this change is to work with a partner focused on delivering these innovations with the stability and security required by international enterprises and Government bodies.

10) Pick a partner that understands the full range of contact centre technologies – from network management and telephony, to cloud delivery of applications and services.

One of the features of 2010 will be the increasingly innovative use of cloud services, and contact centres are no exception to this trend.  For heads of customer service and chief operating officers, the focus this year is to balance exemplary customer service and cost containment.  They may find the cloud – without the hype too often associated with it – is the way to achieve both.


Wednesday, September 08, 2010

Assessing your IT infrastructure for desktop virtualization.


An increasing number of enterprises are using desktop virtualization to reduce their cost of supporting PCs. But while a virtual desktop infrastructure provides the benefits of centralized applications, it also changes how individual users are supported. If your infrastructure is not suited for the VDI model, performance and stability issues can be profound -- and potentially disastrous.

In a virtual desktop infrastructure (VDI), applications run on a server virtual machine (VM) and are linked to the user via a desktop client app. The actual application execution is remote, and PC storage, memory, and CPU access are virtualized and hosted. The master virtual PC can be projected to any suitable client device, but only the user interface is projected. Since the desktop instances are hosted, they can be affected by the same problems as server virtualization applications -- and more.

The big challenge when planning for VDI hosting is the sheer number of virtual desktops. Most companies using server virtualization run two to five virtual servers per actual server: A large enterprise might host 2,000 to 3,000 virtual servers. However, that same company could have 10,000 or more desktop systems to virtualize. Predicting how all those virtual desktops will use the data center resource pool is a challenge.
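
To make that scale concrete, here is a minimal back-of-envelope sketch in Python using the article's illustrative figures; the desktops-per-host density is an assumption for illustration, not a vendor benchmark.

    # Back-of-envelope VDI sizing using the figures above.
    # desktops_per_host is an assumed density, not a measured one.
    virtual_servers = 2500      # mid-point of the 2,000-3,000 server VMs cited
    vms_per_host = 4            # mid-point of the 2-5 virtual servers per physical server
    desktop_count = 10000       # desktops to virtualize, per the article
    desktops_per_host = 40      # assumption; real density depends on workload

    server_hosts = virtual_servers / vms_per_host     # ~625 hosts for server VMs
    vdi_hosts = desktop_count / desktops_per_host     # ~250 additional hosts for VDI

    print(f"Server VMs: ~{server_hosts:.0f} hosts; VDI: ~{vdi_hosts:.0f} more hosts at {desktops_per_host} desktops each")

Even under generous density assumptions, the desktop estate adds hundreds of hosts to the data center pool, which is why resource prediction matters.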

Instead of treating PCs as discrete systems with their own operating systems and middleware, software and storage, virtual desktop technology lets enterprises create machine images of various "classes" of systems and load those images on demand. In some cases, users can customize the configuration of the master image in the same way they would customize the configuration of a real system. But customization means more desktop master images to manage, and changes to application requirements can make an old master incompatible with a worker's current usage.

In terms of resources, memory may be the toughest VDI issue to manage. Unlike server application components that can have short persistence -- particularly in service-oriented architecture software -- desktop applications are designed to load and stay running for hours: They must be paged out to be removed from memory. That paging can create non-sustainable disk I/O loading. Even if a given set of users run the same basic application, in most cases, they can't run the same exact copy. Therefore, a large memory pool that can hold as many discrete machine images as possible is essential.
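
As a rough illustration of why the memory pool matters, consider the sketch below; the per-desktop working set and hypervisor overhead are assumed values, not measurements.

    # Rough memory-pool sizing for one VDI host: assumed values throughout.
    desktops_per_host = 40        # assumed consolidation density
    working_set_gb = 2.0          # assumed resident memory per desktop image
    hypervisor_overhead_gb = 8    # assumed host OS/hypervisor reservation

    required_gb = desktops_per_host * working_set_gb + hypervisor_overhead_gb
    print(f"~{required_gb:.0f} GB needed to keep all desktop images resident")
    # Anything less forces paging, which shifts the pressure onto disk I/O.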

Disk storage is another challenge with hosted VDI applications. On real distributed desktops, each client system's disk usage is handled by its own devices and controllers, so the I/O streams never collide. When desktops are virtualized and hosted, the host system has to field disk I/O for all of the virtual desktops at the same time, which can create congestion and performance problems, particularly if work schedules produce frequent synchronized behaviour. If every user starts the day by reviewing a task list, the 9 a.m. I/O impact can be profound. Therefore, it's critical to have very efficient I/O and storage systems on all VDI hosts.
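
A sketch of that synchronized-login effect follows; the per-desktop I/O rates are assumptions purely for illustration.

    # Estimating the 9 a.m. I/O spike when logins synchronize.
    # The per-desktop IOPS figures are assumptions, not measurements.
    desktops_per_host = 40
    steady_iops = 10          # assumed background IOPS per running desktop
    login_iops = 80           # assumed extra IOPS while a session starts
    login_fraction = 0.5      # assume half the users log in at the same time

    steady = desktops_per_host * steady_iops
    burst = steady + desktops_per_host * login_fraction * login_iops
    print(f"Steady state: ~{steady} IOPS; synchronized burst: ~{burst:.0f} IOPS")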

Affordable solid-state drives are an advance that impacts both memory and storage. Solid-state disks and effective multilayer managed caching of machine images and paging can reduce the memory requirements for a given level of application performance.

Multicore server technology is also an enhancement to VDI support. Remember that in the standard client/server model, the hypothetical enterprise had the total CPU power of 10,000 desktops at its disposal. Compressing those desktops into a set of VM resources is more likely to succeed if every server has several cores to which application load can be allocated. Otherwise, a collision of activity could reduce performance to near-zero levels for all.

The biggest infrastructure challenge for the hosted virtual desktop model is sustaining the performance of the server-to-user connection. Unlike client/server computing, which exchanges basic data elements between the desktop and the server farm, virtual desktop computing must provide a remote display and keyboard interface that can be significantly more bandwidth-intensive. Since the performance of this communications link is critical to user satisfaction, VDI management plans have to take its capacity into account. When the desktop and server are in the same physical facility, only LAN capacity is consumed, and companies can improve virtual desktop performance by increasing the speed of their LAN connections (both to the user and between LAN switches). Enterprises can also flatten their LAN infrastructures to reduce the number of LAN switches between real desktop users and virtual desktop host systems.
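
To see how quickly display traffic adds up on a shared link, here is a small sketch; the per-session bandwidth and burst factor are assumptions, and real figures vary by display protocol and workload.

    # Sizing a LAN uplink for remote-display sessions: assumed values only.
    sessions = 500              # concurrent virtual desktops behind one uplink
    mbps_per_session = 0.5      # assumed average for office work; video is far higher
    peak_factor = 3             # assumed headroom for scrolling/animation bursts

    average_mbps = sessions * mbps_per_session
    peak_mbps = average_mbps * peak_factor
    print(f"Average: ~{average_mbps:.0f} Mbps; size the uplink for ~{peak_mbps:.0f} Mbps peaks")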

Many companies are now considering or deploying virtualization and cloud computing, and in the process, they're refining their data center networking. This is a good time to consider and address the network impact on VDI performance. Flattening the data center and headquarters network improves virtualization and cloud computing performance in the data center as well as VDI application performance.

In cases where VDI supports remote workers, performance will normally be linked to the capacity of the remote access connection. The explosion in consumer broadband has made "business Internet" services with access speeds of 10, 20, 50 or even 100 Mbps available at reasonable costs. Using a VPN with such a service may be the best way to ensure good application performance for remote VDI users.

VDI technology is justified by operational savings, but those savings cannot be realized if business operations are disrupted by performance issues. Invest in adequate VDI resources upfront to properly support your enterprise. As always, conduct a limited-scope pilot test to verify the conclusions of an infrastructure assessment. With careful planning, a VDI project can significantly reduce current costs and contain further cost increases associated with growing PC support demands.


Virtual server training guidelines for your IT team.


Server virtualization is now a common deployment in Indian companies. Businesses want to avoid buying physical servers and paying through the nose for power and cooling, to say nothing of ever-increasing real-estate prices. To begin virtual server training, start with a set of spare systems lying around: install virtualization software on them and test. This should be done purely from a learning and testing perspective, not from a production standpoint. Once your team has hands-on experience with the software, it can move on to the implementation.
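
As one way to get that hands-on practice, the sketch below defines and boots a throwaway VM with the libvirt Python bindings on a KVM lab host. This is only an illustrative assumption: the article does not prescribe a hypervisor, and the VM name and disk-image path here are hypothetical.

    # Minimal sketch: define and boot a throwaway test VM via libvirt on KVM.
    # The domain name and disk path are hypothetical lab values.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>vtest01</name>
      <memory unit='MiB'>1024</memory>
      <vcpu>1</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/vtest01.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    dom = conn.defineXML(DOMAIN_XML)        # register the test domain
    dom.create()                            # boot it
    print("Started test VM:", dom.name())
    conn.close()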

The next step is to select and install server virtualization software. You have a wide choice of vendors, including VMware, Microsoft, and Citrix. Whenever you introduce virtual servers in your organization, the choice of software should be solution-oriented, not product-oriented.

Find out what your company's requirements are, and choose the software based on those requirements and on cost. The best way forward is to have an IT policy in place, as there will be no discipline without one. Ensure that whenever you go about server virtualization, a blueprint is available in front of you. This will lay out the various tasks and their dependencies, making clear that a certain task 'A' must be completed before task 'B' can proceed.

After installing the servers, have a set of pilot testers access them within the network. Virtual server training should take place within a private domain where the testers can experiment and then give feedback. To maintain the new infrastructure, your IT team can bring an IT implementer on board. In addition, there are product-based websites on the Internet that you can refer to.

One of the problems with server virtualization is storage. With multiple workloads drawing on one physical source, hiccups over random input/output are possible. Treat the virtualized server as you would a real physical server. Whenever storage, network bandwidth or other operating system resources are assigned on a host that contains all the virtual machines being accessed by clients, administrators must keep in mind that the virtual servers are limited to the capacity of the underlying physical server. It should not be over-burdened.
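
A trivial sketch of that discipline: before adding a VM, check that the sum of allocations still fits the physical host. All figures here are hypothetical.

    # Check that VM allocations fit the physical host: hypothetical figures.
    host_memory_gb, host_vcpus = 64, 16

    vms = {                     # name: (memory in GB, vCPUs)
        "web01": (8, 2),
        "db01": (16, 4),
        "app01": (8, 2),
    }

    mem_used = sum(mem for mem, _ in vms.values())
    cpu_used = sum(cpu for _, cpu in vms.values())
    print(f"Allocated {mem_used}/{host_memory_gb} GB and {cpu_used}/{host_vcpus} vCPUs")
    if mem_used > host_memory_gb or cpu_used > host_vcpus:
        print("Warning: host over-committed; expect contention under load")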

Ensure that disk resources are allocated from shared network-attached storage rather than from a single server's large local disk. This enables all the virtual machines to work from common storage, saving the cost of dedicating multiple hard drives to each virtual machine.


BIAL’s (Bangalore International Airport Limited) disaster recovery implementation streamlines airport operations.



Bangalore International Airport Limited (BIAL), known for its smooth operations powered by IT, is a prime example of IT infrastructure setup and management. Reputed to be one of the best-operated airports in the world, BIAL has now put in place a disaster recovery implementation that plays a key role in the airport's business.

BIAL has its primary data center and disaster recovery (DR) site at the airport itself. There are about 60 servers of a heterogeneous nature (varying from Windows to Unix to Linux) that host different applications. These servers are spread across two data centers. The servers are mostly HP ProLiant, HP 9000, HP Integrity, Dell, and blade servers from IBM.


The disaster recovery implementation’s reach


The Bengaluru airport’s disaster recovery implementation is hosted in a different building, but within the same campus. As U Nedunchezhiyan, the senior manager of ICT (Infrastructure) at Bengaluru International Airport points out, “Disaster recovery implementation as a business requirement is such that we cannot have the DR site far from the airport. If there is an issue at the Bengaluru airport and the DR site is far away, it is not possible to immediately start it.”

The data center has standard BSES power and UPS backup (with N+1 redundancy and regular battery backups). In terms of connectivity, telecom providers have terminated fiber connectivity at BIAL's fiber handover point. From this point it goes to BIAL's telecom center and then to the users. Connectivity is not available directly between the users and telecom service providers. Apart from having its own infrastructure, BIAL also acts as a service provider to many tenants within the airport. BIAL also provides a common IT infrastructure for all airlines.

The entire Bengaluru airport campus is served by fiber connectivity with redundant loops. As part of the disaster recovery implementation, the airport has ensured network and service redundancy. Arun S, the senior manager of ICT systems for BIAL, says, “Our IT needs are completely different from that of other verticals. Even a second’s difference means a huge loss for us.” That’s why BIAL’s disaster recovery implementation is considered to be very critical by the airport authorities.

BIAL’s DR setup is very closely linked to its recent storage virtualization deployment. Currently, BIAL has gone for Hitachi Data Systems for virtualization of its storage infrastructure. Earlier, it had been using HP EVA (Enterprise Virtual Array) storage.


Critical airport applications


Replication is performed differently for different applications in BIAL's disaster recovery strategy. Business-critical applications are protected using synchronous replication. Applications critical to BIAL's functioning include the airport operational database (the repository for airport operations), ERP and middleware. BIAL's DR site replicates all business-critical applications; however, it is not an exact replica of the primary site.

Security plays a crucial part in BIAL's disaster recovery implementation plan. It begins with physical security. The data center is protected by biometric and digital access controls. Each room is programmed such that only the concerned person has access. There is an in-house audit team that periodically checks for vulnerabilities. BIAL follows the standard ITIL process.

Work on recovery time objective (RTO) and recovery point objective (RPO) is underway at the moment as part of the disaster recovery implementation. Nedunchezhiyan says, “Right now there is tape-based back-up. We are planning to move to disk-based backups in order to improve RTO and RPO.”

The airport has stringent control on downtime, since domestic flights dominate the daytime hours and a large number of international flights operate at night. As a result, permission is required from every group for any sort of disaster recovery work. With the entry of GVK, the airport-management scenario might change considerably. As Arun mentions, the IT team might then have to rework the entire IT DR strategy, with the DR site in a different location. But at the moment, that migration is a long way off for BIAL.
