Sunday, December 28, 2008

Cloud computing


From Wikipedia, the free encyclopedia

Cloud computing refers to the delivery of computational resources from a location other than your current one. In its most common usage, it is Internet-based ("cloud") development and use of computer technology. The cloud is a metaphor for the Internet, based on how it is depicted in computer network diagrams, and is an abstraction for the complex infrastructure it conceals. It is a style of computing in which IT-related capabilities are provided “as a service”, allowing users to access technology-enabled services from the Internet ("in the cloud") without knowledge of, expertise with, or control over the technology infrastructure that supports them. According to a 2008 paper published by IEEE Internet Computing, "Cloud Computing is a paradigm in which information is permanently stored in servers on the Internet and cached temporarily on clients that include desktops, entertainment centers, tablet computers, notebooks, wall computers, handhelds, sensors, monitors, etc."


Cloud computing is a general concept that incorporates software as a service (SaaS), Web 2.0 and other recent, well-known technology trends, in which the common theme is reliance on the Internet for satisfying the computing needs of the users. For example, Google Apps provides common business applications online that are accessed from a web browser, while the software and data are stored on the servers.


Architecture


Cloud architecture[49] is the systems architecture of the software systems involved in the delivery of cloud computing, e.g., hardware, software, as designed by a cloud architect who typically works for a cloud integrator. It typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services.[50]


This is very similar to the Unix philosophy of having multiple programs doing one thing well and working together over universal interfaces. Complexity is controlled and the resulting systems are more manageable than their monolithic counterparts.


Cloud architecture extends to the client, where web browsers and/or software applications are used to access cloud applications.


Cloud storage architecture is loosely coupled, where metadata operations are centralized enabling the data nodes to scale into the hundreds, each independently delivering data to applications or users.
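That split between centralized metadata and independent data nodes can be illustrated with a toy sketch. All class and function names below are hypothetical; real systems add replication, consistency, and failure handling.

```python
class MetadataService:
    """Centralized: tracks only where each object lives, not its contents."""
    def __init__(self, num_nodes):
        self.num_nodes = num_nodes

    def locate(self, object_name):
        # Deterministic placement: hash the name across the data nodes.
        return hash(object_name) % self.num_nodes


class DataNode:
    """Independent: stores and serves object bytes directly to clients."""
    def __init__(self):
        self.blobs = {}

    def put(self, name, data):
        self.blobs[name] = data

    def get(self, name):
        return self.blobs[name]


meta = MetadataService(num_nodes=4)
nodes = [DataNode() for _ in range(4)]

def store(name, data):
    # One cheap metadata lookup, then the data node does the heavy lifting.
    nodes[meta.locate(name)].put(name, data)

def fetch(name):
    return nodes[meta.locate(name)].get(name)

store("report.pdf", b"...contents...")
```

Because the metadata service only answers "where", adding data nodes scales capacity and throughput without the central service touching any object bytes.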


What cloud computing really means

The next big trend sounds nebulous, but it's not so fuzzy when you view the value proposition from the perspective of IT professionals

By Galen Gruman , Eric Knorr
April 07, 2008

Cloud computing is all the rage. "It's become the phrase du jour," says Gartner senior analyst Ben Pring, echoing many of his peers. The problem is that (as with Web 2.0) everyone seems to have a different definition.


As a metaphor for the Internet, "the cloud" is a familiar cliché, but when combined with "computing," the meaning gets bigger and fuzzier. Some analysts and vendors define cloud computing narrowly as an updated version of utility computing: basically virtual servers available over the Internet. Others go very broad, arguing anything you consume outside the firewall is "in the cloud," including conventional outsourcing.


Cloud computing comes into focus only when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT's existing capabilities.


Cloud computing is at an early stage, with a motley crew of providers large and small delivering a slew of cloud-based services, from full-blown applications to storage services to spam filtering. Yes, utility-style infrastructure providers are part of the mix, but so are SaaS (software as a service) providers such as Salesforce.com. Today, for the most part, IT must plug into cloud-based services individually, but cloud computing aggregators and integrators are already emerging.


InfoWorld talked to dozens of vendors, analysts, and IT customers to tease out the various components of cloud computing. Based on those discussions, here's a rough breakdown of what cloud computing is all about:


1. SaaS

This type of cloud computing delivers a single application through the browser to thousands of customers using a multitenant architecture. On the customer side, it means no upfront investment in servers or software licensing; on the provider side, with just one app to maintain, costs are low compared to conventional hosting. Salesforce.com is by far the best-known example among enterprise applications, but SaaS is also common for HR apps and has even worked its way up the food chain to ERP, with players such as Workday. And who could have predicted the sudden rise of SaaS "desktop" applications, such as Google Apps and Zoho Office?
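The multitenant pattern described above can be reduced to a minimal, hypothetical sketch: one shared application and data store, with every access path scoped by a tenant identifier so customers share infrastructure but never see each other's rows.

```python
# One table shared by all customers; the "tenant" column is the partition.
records = [
    {"tenant": "acme",   "contact": "Alice"},
    {"tenant": "acme",   "contact": "Bob"},
    {"tenant": "globex", "contact": "Carol"},
]

def list_contacts(tenant_id):
    # The tenant filter must be applied on every access path;
    # omitting it is the classic multitenancy bug.
    return [r["contact"] for r in records if r["tenant"] == tenant_id]
```

This is why the provider has "just one app to maintain": upgrading the single codebase upgrades every tenant at once.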


2. Utility computing

The idea is not new, but this form of cloud computing is getting new life from Amazon.com, Sun, IBM, and others who now offer storage and virtual servers that IT can access on demand. Early enterprise adopters mainly use utility computing for supplemental, non-mission-critical needs, but one day, they may replace parts of the datacenter. Other providers offer solutions that help IT create virtual datacenters from commodity servers, such as 3Tera's AppLogic and Cohesive Flexible Technologies' Elastic Server on Demand. Liquid Computing's LiquidQ offers similar capabilities, enabling IT to stitch together memory, I/O, storage, and computational capacity as a virtualized resource pool available over the network.


3. Web services in the cloud

Closely related to SaaS, Web service providers offer APIs that enable developers to exploit functionality over the Internet, rather than delivering full-blown applications. They range from providers offering discrete business services -- such as Strike Iron and Xignite -- to the full range of APIs offered by Google Maps, ADP payroll processing, the U.S. Postal Service, Bloomberg, and even conventional credit card processing services.
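Consuming such an API usually reduces to an HTTP request with parameters encoded in the URL. The endpoint and parameter names below are purely hypothetical stand-ins; each real provider documents its own host, path, and authentication scheme.

```python
from urllib.parse import urlencode

def build_request(base_url, **params):
    # Encode query parameters in a stable (sorted) order.
    return base_url + "?" + urlencode(sorted(params.items()))

# Hypothetical geocoding service; "YOUR_KEY" is a placeholder credential.
url = build_request("https://api.example.com/geocode",
                    address="1600 Amphitheatre Pkwy", key="YOUR_KEY")
```

The developer's application then fetches that URL and parses the response, never hosting the underlying functionality itself.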


4. Platform as a service

Another SaaS variation, this form of cloud computing delivers development environments as a service. You build your own applications that run on the provider's infrastructure and are delivered to your users via the Internet from the provider's servers. Like Legos, these services are constrained by the vendor's design and capabilities, so you don't get complete freedom, but you do get predictability and pre-integration. Prime examples include Salesforce.com's Force.com, Coghead and the new Google App Engine. For extremely lightweight development, cloud-based mashup platforms abound, such as Yahoo Pipes or Dapper.net.


5. MSP (managed service providers)

One of the oldest forms of cloud computing, a managed service is basically an application exposed to IT rather than to end-users, such as a virus scanning service for e-mail or an application monitoring service (which Mercury, among others, provides). Managed security services delivered by SecureWorks, IBM, and Verizon fall into this category, as do such cloud-based anti-spam services as Postini, recently acquired by Google. Other offerings include desktop management services, such as those offered by CenterBeam or Everdream.


6. Service commerce platforms

A hybrid of SaaS and MSP, this cloud computing service offers a service hub that users interact with. They're most common in trading environments, such as expense management systems that allow users to order travel or secretarial services from a common platform that then coordinates the service delivery and pricing within the specifications set by the user. Think of it as an automated service bureau. Well-known examples include Rearden Commerce and Ariba.


7. Internet integration

The integration of cloud-based services is in its early days. OpSource, which mainly concerns itself with serving SaaS providers, recently introduced the OpSource Services Bus, which employs in-the-cloud integration technology from a little startup called Boomi. SaaS provider Workday recently acquired another player in this space, CapeClear, an ESB (enterprise service bus) provider that was edging toward b-to-b integration. Way ahead of its time, Grand Central -- which wanted to be a universal "bus in the cloud" to connect SaaS providers and provide integrated solutions to customers -- flamed out in 2005.


Today, with such cloud-based interconnection seldom in evidence, cloud computing might be more accurately described as "sky computing," with many isolated clouds of services which IT customers must plug into individually. On the other hand, as virtualization and SOA permeate the enterprise, the idea of loosely coupled services running on an agile, scalable infrastructure should eventually make every enterprise a node in the cloud. It's a long-running trend with a far-out horizon. But among big metatrends, cloud computing is the hardest one to argue with in the long term.


Galen Gruman is executive editor of InfoWorld. Eric Knorr is editor in chief at InfoWorld.


But What Exactly "Is" Cloud Computing?

By Kurt Cagle
December 17, 2008

If buzzwords didn't exist, the computer industry as we know it would collapse. Really! For instance, here's a quick pop-quiz -


1. Define Cloud Computing in twenty five words or less. Please show all work.

Er ... um ... it's well, it has to do with building virtual computers to host virtual services and support virtual communities while passing virtual messages to virtual ... um .... give me a second ... there's got to be a virtual something here.


Put another way, cloud computing is all virtual - it doesn't really exist!


Okay, so maybe this isn't the best position to take when covering cloud computing, but it does in fact provide a good starting point for understanding what cloud computing is and isn't. There are in fact two good working definitions - a very narrow one, and a much broader one. The narrow one first:


Cloud computing is grid computing, the use of a distributed network of servers, each working in parallel, to accomplish a specific task. As an acquaintance of mine put it, if it isn't using MapReduce, it probably isn't a cloud.
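The MapReduce yardstick can be made concrete with the canonical word-count example: map emits (word, 1) pairs, the framework groups pairs by key, and reduce sums each group. Real frameworks distribute these phases across a cluster; this single-process sketch shows only the programming model.

```python
from collections import defaultdict

def map_phase(document):
    # Emit a (key, value) pair for every word occurrence.
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(grouped):
    # Collapse each key's values into a single result.
    return {word: sum(counts) for word, counts in grouped.items()}

def mapreduce(documents):
    grouped = defaultdict(list)
    for doc in documents:                 # normally parallel across nodes
        for key, value in map_phase(doc):
            grouped[key].append(value)    # the "shuffle" step
    return reduce_phase(grouped)

counts = mapreduce(["the cloud", "the grid the cloud"])
```

Because map and reduce are side-effect-free over independent chunks, the framework is free to scatter them across as many machines as are available.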


Of course, if we were to deal with this strict definition, then all the hype about "cloud computing" and the opportunities for companies to hawk their wares as "cloud friendly" simply wouldn't exist ... and where would the fun be in that? This is especially true given that there simply aren't that many problems, even at the large-enterprise level, that require "slow" massively parallel processing (i.e., processing distributed over networks whose latency is high compared to processor speed).


The Era of Distributed Virtualization

Selling massive economic simulations would probably not find much of a market at this point in time, and weather simulations are realistically feasible only if the grid is relatively self-contained. Hmmm ... you can process deep-space satellite data over the grid, of course, and perhaps unfold a few proteins here and there, but chances are pretty good that most businesses just don't have the problems that make grid computing that attractive. So on to the broader definition:


Cloud computing is the distributed virtualization of an organization's computing infrastructure.


Now this is good market-speak - vague enough to have almost any possible meaning, with lots of multisyllabic words that sound really impressive on a PowerPoint slide (and you have to love the way that "distributed", "virtualization" and "infrastructure" got so casually tossed out there).


However, while this is perhaps a bit too broad as a working definition, it does in fact point to what seems to be emerging as the next major "platform". If you talk about cloud computing as distributed virtualization, you're actually getting pretty close to a workable definition.


Much of the work of the last decade has been involved with moving from centralized architectures to distributed ones. Centralized architectures, such as the famed "client-server" relationship of yore, involved a hub-and-spoke arrangement, where multiple clients connected to a single server, and each server in turn communicated with more powerful "routers". Most applications weren't truly distributed ... instead, they existed as virtual server sessions within the server itself, with just enough state pushed to the client to handle very minimal customizations.


Put another way - the applications stopped at the server boundary.


Eventually, however, it became obvious that it was not that efficient to store your data on the same machine that handled the application logic. This translated into the first "distributed" applications, in which data was kept within a separate "data tier" in a different box, and the data access then occurred through an abstraction layer between the data tier and the logic tier. Client-server became three-tier, with messaging becoming an increasingly important part of the overall process.


Three-tier rapidly became n-tier as different services began entering the mix. One consequence of this shift to n-tier application development is that the messaging architecture continues to grow in importance relative to the actual services being deployed - and the standardization of messaging in turn provides a powerful tool for services to simplify their underlying public interfaces to best work with that messaging format. Put another way, as messaging becomes more uniform, service interfaces tend to become simpler in order to best work with these messaging formats - interfaces become abstract (or virtual).


From Virtual Machines to Commodity Computing

On the physical front, virtual machines have been developing in parallel with this messaging architecture. The concept of a virtual machine has been around for a while - build a "fake" machine that takes a specific set of commands from the applications that run on top of it, then convert those commands into instructions that the underlying machine can use. These have become considerably more sophisticated, with applications like VMware able to run one operating system within another.


The VMware model is significant in a number of respects - by providing networking access and a virtualized driver model, then allocating hard drive space as a virtual partition, VMware not only let someone create a virtual system, but also made it possible to take a "snapshot" of that system at any given time that could be saved and then run again at some later point. This meant that an application developer could effectively clone a "template" snapshot of a given application and distribute that as a virtual instance - you could literally have a completely functional, fully enabled system up and running in under a minute.


Other companies and projects took a different approach to machine virtualization. In essence, this approach involved building the virtualization capability into the host operating system directly (rather than run it as a secondary application) meaning that you could start with a "bare-bones" operating system then bring multiple machines online on the same piece of hardware.


In this case, the bare-bones system became known as a hypervisor (the next step up from a supervisor, presumably) - specifically, a Type I hypervisor. The Xen project uses this approach, as does Microsoft's Hyper-V. VMware-style approaches, on the other hand, are typically considered Type II hypervisors, because they run as applications within a full host operating system, rather than as a stand-alone operating system running in tandem with other stand-alone operating systems.


These systems were originally developed as conveniences for developers, letting them work on multiple systems simultaneously, but the whole hypervisor concept has taken off dramatically in the cloud-computing space. Typically how this works is that a company with spare processing capability sets up multiple large machines that might have many terabytes of storage, hundreds or even thousands of CPUs and hundreds of gigabytes of RAM.


These systems then use hypervisors to partition these meta-systems into distinct virtual machines that can be configured to any size or power. Unlike physical machines, such virtual machines can additionally be powered on or off without actually shutting down the actual server, and they can moreover have more memory or processing capability added simply by changing a configuration file.
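The "resize by changing a configuration file" property can be sketched as a host handing out slices of its physical capacity according to per-VM configuration. Class names and numbers here are illustrative only; real hypervisors track CPUs, I/O, and storage as well.

```python
class Host:
    """A physical machine whose RAM is partitioned among virtual machines."""
    def __init__(self, total_ram_gb):
        self.total_ram_gb = total_ram_gb
        self.vms = {}  # vm name -> allocated RAM in GB

    def free_ram(self):
        return self.total_ram_gb - sum(self.vms.values())

    def apply_config(self, name, ram_gb):
        # "Resizing" is just a config change, provided capacity remains.
        current = self.vms.get(name, 0)
        if ram_gb - current > self.free_ram():
            raise ValueError("host capacity exceeded")
        self.vms[name] = ram_gb

host = Host(total_ram_gb=256)
host.apply_config("web-1", 16)
host.apply_config("web-1", 32)   # grow the VM without touching hardware
```

Powering a VM on or off is, in this model, just adding or removing an entry - no physical server is ever rebooted.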


There are downsides to this approach, of course. For starters, any hypervisor-hosted OS is inherently running two operating systems, even if one is only minimal, and the abstraction layer takes a certain number of cycles away from the actual processing - which means that a blazingly fast sea of processors will still produce only a moderately fast virtual machine. Additionally, bandwidth becomes a considerably more constrained resource, which makes hypervisored servers reasonable for web hosting but fairly abysmal (and expensive) for hosting video and similar bandwidth-intensive media.


A number of companies have, within the last couple of years, created "cloud computing centers" that take advantage of hypervisors and Storage Area Networks (SANs) in order to create hosted environments where businesses can effectively duplicate (or replace) much of their existing IT infrastructure. It can be argued that, by the narrower definition above, this is not technically cloud computing, as in many cases the virtual systems are in fact working simply as fairly distinct web servers, but this is the point where the marketing hype transcends the literal definition by just a bit.


Amazon was the first to really make the "cloud computing" model work effectively, with the creation of its Elastic Compute Cloud (EC2). EC2 uses virtualization and a publicly available API to make it possible to bring up one or a hundred virtual computers simultaneously. It is complemented by the Simple Storage Service (S3), which effectively provides SAN-style data storage. Amazon's pricing is competitive (if a bit on the high side for some applications).
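The elasticity this model popularized - instances arriving and departing through an API call rather than a procurement process - can be caricatured with a toy in-memory sketch. The real service exposes analogous network calls (EC2's API includes actions named RunInstances and TerminateInstances); this simulation only mimics the idea.

```python
import itertools

class ComputeCloud:
    """Toy stand-in for a provisioning API; nothing real is launched."""
    def __init__(self):
        self._ids = itertools.count(1)
        self.instances = {}  # instance id -> machine image name

    def run_instances(self, image, count):
        ids = [f"i-{next(self._ids):04d}" for _ in range(count)]
        for i in ids:
            self.instances[i] = image
        return ids

    def terminate(self, instance_id):
        del self.instances[instance_id]

cloud = ComputeCloud()
fleet = cloud.run_instances("web-server-image", count=3)  # scale up
cloud.terminate(fleet[0])                                  # scale down
```

The point of the exercise: capacity becomes a function call, so scaling decisions can live in ordinary application code.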


Microsoft also entered this space recently with Windows Azure, which provides similar virtual Windows systems, along with a full complement of tools for building large-scale distributed applications. Sun has effectively "re-entered" this space - its first effort in cloud computing, the Sun Grid, attracted a fair number of customers but was somewhat ahead of its time, and as a consequence the company has recently been re-promoting its own cloud credentials.


It should also be noted that many of the big hosting services have not been napping as cloud computing has caught fire. Voxel CEO Zachary Smith noted, in an interview with O'Reilly Media earlier this year, that companies such as Voxel, GoDaddy, and other large scale hosting services have been providing virtual servers at a much lower price point than their dedicated servers for a couple of years now.


Moreover, he is pushing strongly to get an industry-wide agreement on a common standardized API for creating server instances programmatically, possibly using the Amazon EC2 APIs as a model. In order for true commodity computing to come of age, a common industry standard will definitely need to emerge.


Cloud Computing Is Services Computing

You may have noticed the preponderance of the word "service" in the last section. This is no coincidence. The upshot of virtualization is that you are effectively creating an abstraction layer for the hardware - in essence turning that hardware into software that is accessible through a programmable interface, almost invariably using the Internet as the messaging infrastructure.


There is a tendency in cloud computing to focus on the hardware, but ultimately, cloud computing is in fact the next stage in the evolution of services that has been ongoing for at least the last decade. The concept of Software as a Service (SaaS) is gaining currency especially at the small and medium-sized business (SMB) level, where the advantages of maintaining an internal IT department are often outweighed by the costs. As the economic situation continues to deteriorate, SaaS is likely to become increasingly common moving up the business pyramid.


In a SaaS deployment, applications that had traditionally been desktop office "productivity" applications - word processors, spreadsheets, slide presentation software and the like - are increasingly becoming web-based, though many are also gaining the "offline" capabilities that contemporary browsers are beginning to support, either built in or through components such as Google Gears. Google Apps provides a compelling example of a SaaS suite, combining sophisticated word processing, spreadsheet and presentation software into a single web suite. Zoho offers similar (and arguably superior) capability.


Microsoft has recently debuted Microsoft Office Live Workspace, which effectively provides a workspace for working with common documents online, but it raises the question of whether this is in fact a true cloud application, as it still effectively requires a standalone version of Microsoft Office to edit those documents.


Salesforce.com has often been described as a good cloud computing application, though it's worth noting that this application also shows the effects that cloud development has on applications. The Salesforce application feels very much like a rich CMS application (similar to Microsoft SharePoint or Drupal, which also have cloud-like characteristics) dealing with complex dedicated document types.


Cloud Computing and RESTful Services

This concentration on document types itself seems to be an emergent quality of the cloud. Distributed computing really doesn't tend to handle objects all that well - the object oriented model tends to break down because imperative control (intent) is difficult to transmit across nodes in a network.



This is part of the reason why SOAP-based services, which work reasonably well as a remote-procedure-call mechanism for closed, localized networks (such as within the financial sector), don't seem to have taken off as much as they reasonably should have on the web. In general, distributed systems seem to work best when what is being transmitted is sent in complete, self-contained chunks ... otherwise known as documents, and when the primary operations used are database-like CRUD operations (Create, Read, Update and Delete).


This type of architecture (called a REST architecture, for Representational State Transfer) is very much typified by the way that resources are sent and retrieved over the web, and effectively treats the web as an addressable database where collections of resources are key to working with the web.
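In code, the REST style amounts to mapping the CRUD operations onto the main HTTP verbs, with every resource named by a URL. A framework-free, hypothetical sketch (real services layer authentication, validation, and content negotiation on top of this):

```python
# Conventional verb-to-operation mapping used by RESTful services.
HTTP_TO_CRUD = {"POST": "create", "GET": "read",
                "PUT": "update", "DELETE": "delete"}

resources = {}  # URL -> document; the web treated as a keyed store

def handle(method, url, document=None):
    op = HTTP_TO_CRUD[method]
    if op in ("create", "update"):
        resources[url] = document
        return document
    if op == "read":
        return resources[url]
    if op == "delete":
        return resources.pop(url)

handle("PUT", "/docs/1", {"title": "Cloud computing"})
```

Because the document at each URL carries complete state, any node that knows the URL can participate - no session or imperative "intent" needs to cross the network.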


A new, emerging model of cloud computing as a consequence is the RESTful Services Model, in which complete state is transferred from point to point within the network via documents while ancillary operations are accomplished through the use of messaging queues that take these documents and process them asynchronously to the transmission mechanism.


The SOAP/WSDL model is one that has taken off especially for financial and intra-enterprise clouds, though here the SOAP wrapper is used not as a flag to trigger specific tasks by the receiving system but as an envelope for queue processing (indeed, the RPC model that many early SOAP/WSDL proponents pushed has been largely abandoned as being too fragile for use over the Internet). Service Oriented Architectures (SOAs) describe the combination of SOAP messages and node-oriented services, typically with a bias towards intentional systems (systems where the sender determines the intent of the message, rather than the receiver).


A second model comes in the use of JSON - a textual representation of a JavaScript object - as a mechanism for transferring state. This model works very effectively in conjunction with web mashups, though its very simple structure and lack of support for mixed content (among other factors) make it less than ideal for the transmission of semi-structured documents.
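JSON state transfer in miniature: application state serializes to a self-contained text document and round-trips losslessly, which is precisely what makes it so convenient for mashups. The field names here are invented.

```python
import json

# Some application state (hypothetical fields).
state = {"user": "kcagle", "cart": ["book", "coffee"], "total": 23.5}

wire = json.dumps(state)       # the document that actually crosses the network
restored = json.loads(wire)    # the receiver reconstructs the full state
```

Note the flat key/value shape: there is no natural place for the attributes or mixed text-and-markup content that XML document formats handle routinely, which is the trade-off the paragraph above alludes to.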


The third RESTful model is the use of syndication formats, such as Atom or RSS, as mechanisms for transmitting content, links and summaries of external web resources. Because syndication formats are so closely tied to publishing operations, they tend to be a particularly natural fit for RESTful services.
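An Atom entry is itself just a small XML document; here is a minimal one built with the Python standard library. The element names and namespace follow the Atom specification (RFC 4287); the entry's contents are invented.

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"

entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "Cloud computing"
ET.SubElement(entry, f"{{{ATOM}}}id").text = "urn:example:entry:1"
link = ET.SubElement(entry, f"{{{ATOM}}}link")
link.set("href", "https://example.com/posts/1")

xml_bytes = ET.tostring(entry)  # the document a feed would transmit
```

Each entry is a complete, addressable unit with its own id and link - exactly the self-contained document shape that RESTful transfer favors.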


One of the most powerful expressions of such RESTful Services is the combination of XQuery/REST/XForms (or XRX), in which you have a data abstraction model (XQuery) pulling and manipulating data from other data sources such as XML or SQL databases, a publishing (RESTful) layer and syndication format for encoding data (or data links) such as Atom and its publishing protocol AtomPub, and a declarative mechanism for displaying or editing whole documents on the client (XForms being the most well known, though not the only solution).


While this particular technology is still emerging, vendors and project developers are already working on building integrated solutions. Tools such as MarkLogic Server, the eXist XML database, EMC/Documentum's X-Hive XML server, Orbeon Forms and Just Systems' xfy system, as well as similar offerings from Microsoft, IBM, Oracle and others in the syndication space, attest to the increasing awareness of and potential for XRX-oriented applications.


The Edge of the Cloud

One of the more interesting facets of clouds is that the closer you get to them, the harder it is to determine their edges. This is one thing physical clouds share with their digital analogs - the edge of a virtual cloud is remarkably ambiguous. It's typical in a network diagram to use the convention that the edges of such clouds are web clients - browsers, web-enabled applications; in essence, anything that sports a browsable screen, is used by humans, and, most importantly, doesn't actually contribute material to the state of a given application.


However, this definition is remarkably facile. Consider a tool such as curl, which has no GUI at all but is quite frequently invoked by other applications. Or think of most contemporary browsers, which support (or will soon support) offline storage. Both client and server have web addresses (though admittedly DHCP can complicate this somewhat), and certain web clients (typically physical devices) actually have built-in static IP addresses - they can act both as clients and servers.


Put another way, the notion of web client and web server is slowly giving way to web nodes. Such a node may act as a client, a server, a transit point or all three. This is now increasingly true as AJAX-based web applications become the norm. What this means in practice is that in cloud computing, there really are no edges, but rather a fractal envelope that describes the stage where you have no further connection points - in this case, think of the overall outline (or envelope) of a tree - while individual branches may end within the envelope or be touching the envelope, none extend beyond it.


Is web programming part of cloud computing? Only in very abstract terms - generally, either when you're refreshing the overall state of a given document's content or when you're updating that state through XMLHttpRequest or other peer-to-peer communication protocols. It's fairer to say that most computer languages will eventually incorporate (through libraries or by themselves) cloud computing components ... indeed, most already do.


Languages such as Erlang have specifically evolved for use in asynchronous, multiprocessor, distributed environments that look suspiciously like clouds, while the MapReduce framework written by Google is intended to handle the processing of large amounts of data over clusters of computers simultaneously (which also highlights that while Google does not (yet) have a formal public or proprietary cloud, it has been laying the foundation for much of what is emerging as cloud computing within its own search-intensive operations).


In a sense, cloud computing is an architectural concept rather than a programming one per se. For instance, it's probably fair to say that bit torrents, which use a peer-to-peer architecture for transmitting pieces of a given resource from multiple sources, represent a fairly typical cloud computing application - asynchronous, massively parallel, distributed, RESTful (torrents are not concerned with the content of the pieces, only their underlying existence) and virtual (the resource does not actually exist as a single entity, but has reality only in potential, as many packets, some of which may be duplicates and some of which may no longer actually exist on the web).
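That "resource as many possibly duplicated pieces" idea can be sketched as reassembly: pieces arrive out of order from different peers, some more than once, and the file exists only once every index is covered. This is a deliberate simplification - real BitTorrent also hash-checks each piece for integrity.

```python
def reassemble(pieces, total):
    """pieces: iterable of (index, bytes) from any peers, in any order."""
    have = {}
    for index, data in pieces:
        have.setdefault(index, data)   # duplicate pieces are harmless
    if len(have) < total:
        return None                    # the resource is still only virtual
    return b"".join(have[i] for i in range(total))

# Pieces as they might dribble in from a swarm of peers.
swarm = [(1, b"wor"), (0, b"hello "), (1, b"wor"), (2, b"ld")]
result = reassemble(swarm, total=3)
```

Until the last index arrives, there is no file anywhere - only overlapping fragments scattered across the network, which is the "virtual" quality the paragraph describes.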


Clouds on the Horizon

It's interesting to note that this also leads full circle back to grid computing. Grid computing had its origins in applications such as SETI@home, which used the free cycles of participating PCs to analyze signals from radio telescopes, attempting to find apparently artificial, non-random signal patterns that might indicate intelligent life.


Ironically, such use of free cycles has never really taken off beyond very specialized applications, largely because of the very real concerns for security. Cloud computing is far more likely to continue evolving, for at least some time, within massive proprietary or dedicated public clouds, rather than ad hoc networks, at least until a way can be found to monetize such ad hoc networks.


Overall, however, the future of cloud computing actually looks quite bright, perhaps because of the very storm clouds that have gathered on the economic horizon. Cloud computing provides ways to reduce the overhead of a formal IT department within a small to medium sized organization ... especially one for which IT is a significant expense.


For instance, a school district may choose to use virtual machines to set up web sites, centralize grades and reporting, host distance-learning systems, and so forth, saving not only on the physical maintenance of machines and bandwidth but also gaining the ability to add or remove servers as needed to reflect demand.


Beyond the immediate advantage of reducing physical hardware, cloud computing also has the added advantage of reducing the environmental costs associated with maintaining that infrastructure, along with the power costs.


For instance, the IT manager for a Postal District in Tacoma, Washington laid out to me one of the central problems with their growing IT usage - the building which housed the servers was not designed to handle the heat and electrical load of more than eighty servers, and they had reached a stage where they were seriously looking for better facilities. Instead, they began, server by server, to move non-critical servers to virtual counterparts using a hosted service provider. They kept the most critical servers local, but they were able to reduce their physical server needs by nearly 60%, and were able to put off looking for new facilities for the foreseeable future.


This does point out that, as with any IT strategy, migrating to virtual servers in the cloud makes the most sense for non-mission-critical functions, and any such strategy should also look at recovery and response time when outages do occur. The danger, of course, is that failure of a cloud center could have disproportionately bad economic effects. On the other hand, this is true of any large-scale IT deployment, and, because of these considerations, cloud centers typically have multiple redundancies in power and backup in place, so that if a failure does happen, the losses will be minimal.


This also applies to the ability of such centers to handle the environmental impacts of running virtual IT centers. Virtual computers use considerably less energy per CPU cycle than physical ones do; most virtual computers are very efficient in their memory and processing allocation, because much of that is handled in RAM rather than in far more expensive disk access operations. Moreover, facilities that host such systems are specifically designed to handle large numbers of servers running simultaneously, introducing much more efficient cooling and power-draw systems than are typically found in most IT departments.


This means that by virtualizing the least mission-critical parts of your IT infrastructure on the web, you can also realize significant savings in cooling systems, electrical infrastructure, and facilities management, all of which translate to the bottom line.


This virtual world of cloud computing does, in fact, have some significant impacts on the real world ... and will have more as businesses become more comfortable moving their services and infrastructure into the cloud, as technologies for dealing with cloud computing improve, and as standards and methodologies for developing in this new computing environment solidify.


What this means, of course, is that this particular cloud has a practically golden lining - and will chase the storms away.


Kurt Cagle is an author, developer, and online editor for O'Reilly Media, living in Victoria, BC, Canada.

Sunday, December 14, 2008

Unconventional Wisdom in a Downturn

by Robert S. Kaplan, David P. Norton, Stewart D. Friedman, BV Krishnamurthy, Tamara J. Erickson, Jeffrey M. Stibel, and Peter Delgrosso

“What best practice challenges the conventional wisdom about what to do in a downturn?” We put that question to our team of management bloggers at harvardbusiness.org. Following is an edited selection of their provocative responses.

Protect Strategic Expenditures

by Robert S. Kaplan and David P. Norton

Many executives react instinctively during economic slowdowns by cutting discretionary spending across the organization. But such an indiscriminate slash-and-burn response is a big mistake because it fails to distinguish between short-term operational and long-term strategic programs. Unless the downturn threatens a company’s existence, executives should focus on rooting out operational slack and inefficiency, not on modifying or sacrificing strategic initiatives, which build capabilities for long-term competitive advantage.

To help companies preserve and strengthen their strategic programs, we developed a new expenditure category, strategic expenditures (or StratEx), to supplement the traditional capital and operational expenditure categories. We have found that unless StratEx are segregated from the other categories and protected, managers will view them as discretionary. Faced with short-term economic hardship, managers often defer or transfer funds from their strategic initiatives to achieve near-term financial targets—a principal reason why most organizations have so much trouble sustaining their strategy execution processes.

Two companies we have worked with have effectively cordoned off StratEx. Nordea, a leading bank of northern Europe, created a separate process for planning and funding its strategic initiatives. After the annual meeting that updates the company’s strategy, strategy map, and balanced scorecard, the executive team identifies the strategic initiatives required to achieve the performance targets on its scorecard. It then assigns one of its members to sponsor each project and funds those initiatives under a separate budgetary authority. The initiatives’ sponsors follow up with monthly progress reports to executive committees.

Ricoh, a manufacturer of office automation equipment, creates a strategic investment fund for projects not included in its normal operational and capital budget. Working from the company’s three-year strategic plan, business and functional units prepare and submit detailed proposals for funding the initiatives identified in their own respective plans. A team comprising the CEO and members of the strategy and planning office analyzes each proposal in depth and allocates capital from the strategic investment fund to the projects they deem most important. The CEO and strategy and planning office meet quarterly to monitor the progress of the projects.

During a downturn, companies attempt to eliminate the slack and inefficiencies accumulated during the recent growth period. But their attempts to cut fat and waste often slice into newly growing muscle, bone, and tendon. Creating a StratEx funding category helps companies continue to build capabilities for the future while eliminating the excesses of the past.

Dial Down the Stress Level

by Stewart D. Friedman

The knee-jerk response in an economic downturn is to wring greater productivity out of your workforce by making employees work harder. But this can hurt more than help, by fueling resentment and burnout. A smarter approach: Be open with employees about the business problems you face, and invite them to be part of the solution while encouraging them to meet critical needs in other parts of their lives. Do this right and you’ll reduce stress, decrease wasted time, boost trust, build resilience, and improve productivity.

Contrast three approaches you might take as the manager of a solid performer when times are tough:

• “Hey, Sarah, we’re having a bad year, so if you want any kind of bonus at all, you’re going to have to suck it up and work harder than ever before. Sorry, I know it’s tough, but that’s just the reality.”

• “Hey, Sarah, I know that there’s a lot of pressure on you now, on all of us, really, and I want to make sure you’re getting it all done. Let me know how I can help.”

• “Hey, Sarah, I know that there’s a lot of pressure on you now, on all of us, really, and I want to make sure you’re taking care of all the things that are important to you—not only at work but in other areas of your life, too—so that you don’t burn out. What small changes could you try here that would make things easier, so you’d have more energy to focus on performing well for our business? We desperately need your best efforts!”

The first option helps Sarah face the harsh reality and ties economic incentives to her performance. But you’ve not dealt with whatever other stressors Sarah is facing or explored what’s really driving her. That means the burnout risk is high, and the energy she might bring from her most powerful sources of motivation remains unused. The second option shows your empathy and desire to be supportive, but it’s so passive and vague that she probably won’t even be convinced that you’re serious about providing real support—much less be inclined to change her actions.

The third option has the greatest chance of producing the results you want because, my research shows, the more attention you pay to employees’ lives beyond work, the more you’ll get out of them at work—especially during times of great stress. If you acknowledge the pressure Sarah is under and show that you think about her as a whole person, you will most likely be rewarded with loyalty and extraordinary effort.

Of course, although you are sending the message that you value her ideas and are willing to try them, you are not telling Sarah she can do whatever she wants. Her experiments should be made up of little changes, like shifting her schedule to avoid rush hour, which she might try for a month or so, and they must benefit your business as well as her life beyond work.

Smart experiments are designed to produce what I call four-way wins. They’re intended to benefit work, home, community, and self (mind, body, and spirit) all at once. (For details, see my book Total Leadership: Be a Better Leader, Have a Richer Life.) When people undertake those experiments, they shift some of the attention they have disproportionately allotted to work and dedicate it to the other domains. The result is surprising: Satisfaction and performance in all domains, including work, go up.

Use Downtime to Enhance Skills

by BV Krishnamurthy

In even the best of times, organizations often pay lip service to professional development. Excellent frontline workers might receive better titles or become project managers without actually having learned how to deliver a group’s work on time, assure quality, and stay within budget. Even experienced managers may be so preoccupied with quarterly, monthly, weekly, and daily reports that they have no chance to learn something new—or to unlearn what’s become obsolete. A downturn presents the perfect downtime to enhance the skills your people really need to excel.

“Are you kidding?” you might ask. “When times are tough, professional development is a luxury.” Not so. Often that’s precisely when there is enough breathing room in the daily work flow to give your people the chance to better themselves. Employees at all levels can be sent for training to improve their team-building, collaboration, process ownership, and other skills—which pays off when economic normalcy returns.

For example, in response to the economic downturn of 2000–2002, Alliance Business Academy started conducting annual team-building exercises at a top Indian software company. Working with two of the company’s 200 teams a year, ABA focused the training on endeavors such as completing joint tasks, clarifying group values, and improving team processes. Since the program started, trained teams have been 50% more productive, on average, than untrained teams, according to aggregated measures of quality, time, and cost. This year, the company is putting five teams through the program. ABA has replicated the results at a major aerospace company and a large European engineering conglomerate.

Such professional development pays off most with employees whose team skills are poor but whose impressive individual performance precludes letting them go. A joint research project between ABA and two European business schools has borne that out. In a study of 36 companies in the manufacturing, financial services, and transport sectors in five countries, star performers with poor team skills became change agents within their firms after going through two cycles of team-building exercises lasting 10 days each. Teach solo high performers how to collaborate better, focus on the big picture, and consider the organizational implications of their work, and you’ll reap sizable rewards.

Of course, casting a downturn as an opportunity to fine-tune skills is not easy. Various stakeholders’ anxieties about the short term need to be assuaged and framed in a long-term context. That includes the people whose skills are being improved. They need to have a broad enough view of how their professional development fits into organizational goals to be sufficiently motivated to make the downtime investment pay off. And they must be confident that the organization’s culture will tolerate honest mistakes as they progress and grow.

Those caveats notwithstanding, actively seizing a downturn as an opportunity can reduce the pain of the current one and can soften the blow of the next. Those are luxuries you can’t afford not to indulge in.

“Give Me the Ball!” Is the Wrong Call

by Tamara J. Erickson

Almost all executives I know feel the weight of obligation deep in their bones. They feel a duty to the owners of the business and to customers, of course, but perhaps an even greater one to the employees and families who depend on the company for their livelihoods.

So it’s no surprise that, in troubled times, many leaders believe it’s their job not only to call the shots but also to personally execute the key plays. That’s the nature of leaders. Faced with a crisis, executives often shout, “Give me the ball!” Executive instinct drives greater control—they review costs, tighten approval criteria, redirect key decisions to higher levels, ensure everyone is as busy as possible, narrow the business scope, and so on. Small teams of executives attend secret retreats to review options even as meetings that would bring all the troops together are canceled. As a result, authority becomes centralized.

What leaders frequently forget in the heat of crisis is that the wisdom of crowds applies within their own companies. Instead of hogging the ball during a downturn, they ought to tap the ideas and the energy of the entire organization. When times are tough, leaders should:

Ask great questions. Challenge the organization to respond by setting intriguing and complex goals. Don’t narrow the focus of your questions to the mundane or overspecify how teams should approach challenges. Articulate a compelling mission that will get people to rally.

Build trust across the organization. Don’t cut out meetings, intensify internal competition, or reduce investments in learning. Increase your firm’s collaborative capacity by building relationships and encouraging the exchange of knowledge (see “Eight Ways to Build Collaborative Teams,” HBR November 2007).

Challenge the status quo. Ensure that your team is regularly exposed to diverse points of view and experiences. Don’t cut travel or fall back on tried-and-true players. Bring in new voices and new ideas—and take them seriously. Get outside your business sphere. Encourage brainstorming and scenario analysis. Don’t abandon training and experimentation. Invest in your people.

Jorma Ollila followed these principles during his tenure as CEO of Nokia. Consistent with the company’s deep traditions of teamwork and global collaboration, he encouraged newly hired leaders to travel to form personal relationships with the diverse array of individuals around the world who would affect their performance. And he never sacrificed his goal of achieving a wireless information society.

Leaders can strive toward ambitious goals during tough periods at their firms and still manage to share obligations broadly. Downturns are no time to tighten control. They’re opportunities to inspire your people to become more spontaneous and innovative. Pass the ball.

Discounts Can Be Dangerous

by Jeffrey M. Stibel and Peter Delgrosso

During tough economic times, companies often rush to reduce prices on their products and services. That seems like common sense: People can’t afford to spend as much, so charge less to keep them buying. But discounting has its perils.

To be sure, discounting is effective when done wisely and strategically. It can get consumers excited about a product, encourage them to buy more, and help your short-term bottom line. However, whether the purchase is a hot dog, a handbag, or a stay at a five-star hotel, customers want good value for their hard-earned money. The price of something is often an important determinant of its perceived value, as Dan Ariely points out in Predictably Irrational. Often, the more consumers pay, the more value they ascribe to a purchase. If you discount prices purely to boost sales, buyers may begin to question that value.

Consider Abercrombie & Fitch, which lowered prices by roughly 15% during the 2000–2002 downturn. When the dust cleared, the company realized that it had sacrificed much of its brand’s cachet and lost significant market share. A&F didn’t recover until 2004—and then only after returning to higher prices. In August 2008, having learned its lesson, the company announced that it was considering another price increase, despite a decline in second-quarter profits. The goal: to enhance what the CEO called the “iconic status” of the brand.

But discounting is so easy that some companies simply can’t resist. Starbucks, which posted its first-ever earnings loss in July, has begun to offer lower-priced options, such as a cup of coffee for $1, with free refills. This strategy may boost sales in the short term, but we suspect that, as with A&F, it will hurt the Starbucks brand in the long term.

Discounting is not always a bad idea, though—there are safe ways to lower prices. Earlier this year, Chrysler discounted something that does not affect its core brand: gasoline. It guaranteed to purchasers of new cars a price of no more than $2.99 per gallon of gas for three years. The idea was to subsidize the fuel that a new car uses, not the car itself. It’s similar to what GM did in 2001 by discounting its financing rather than its cars. Obviously, the auto industry has more problems than brand deterioration. Nonetheless, this is smart marketing during a downturn: It couples the appeal of a discount with an implicit message about the value of the core product.

So if you’re eyeing a simple, traditional discount strategy during the present slowdown, first consider the potential for damage to your brand and then evaluate the brand insurance that a more nuanced approach may offer. If you inadvertently shatter your brand’s mystique, reestablishing the value proposition to consumers may be tougher than you expect.