Tuesday 19 March 2013

Seven Value Creation Lessons from Private Equity


What top-tier PE firms can teach public companies about creating and sustaining value over time.

Companies are in business to create value for their stakeholders, a pursuit that occupies countless hours in boardrooms and executive suites around the world. A select number of companies get it right — they set the correct value creation course and sustain it over time. But many do not. Some companies cannot find the right strategic path; others cannot execute their strategy. Still others execute well for a while but then lose their way. And another group of organizations become so exhausted by rounds of business transformation (that is, cost cutting) that they lack the stamina to search for additional ways to secure and sustain value.
For public companies, these challenges are intensified by quarterly reporting requirements, governance rules meant to drive accountability and transparency, and the demands of a vastly larger and more vocal group of stakeholders. Private equity firms, however, enjoy a number of natural advantages when it comes to building efficient, high-growth businesses, including a built-in platform for change (for example, a predetermined exit within 10 years), tightly aligned ownership and compensation models, and fewer institutional loyalties and competing distractions. But despite these distinctions, the best practices of top-tier PE firms still provide powerful and broadly applicable lessons. Public companies can adapt the following seven imperatives from private equity to build a value creation regimen.
1. Focus relentlessly on value. To attract continued investment from limited partners and earn the generous fees for which they are renowned, private equity firms have to maintain a laser-like focus on value creation, beyond simple financial engineering and severe cost cutting. More and more PE deals feature substantive operational improvements that result from the application of deep industry and functional expertise. Private equity firms are in the trenches at their portfolio companies, investing in core operations as often as they are cutting extraneous costs.
Private equity firms’ focus on core value begins with due diligence. General partners carefully choose each target company and explicitly define how they will create incremental value and by when. This assessment does not stop after the acquisition — they periodically evaluate the value creation potential of their portfolio companies and quickly exit those that are flagging to free up funds for more remunerative investments.
That can often mean exiting entire lines of business that are not drawing on the company’s core strengths and differentiating capabilities. Public companies should try to apply a similarly objective and dispassionate lens to their portfolio of businesses by assessing first their financial performance, and then the degree to which each employs mutually reinforcing capabilities that cross business unit lines and distinguish the enterprise as a whole.
2. Remember that cash is king. Private equity firms typically finance 60 to 80 percent of an acquisition with debt. This high-leverage model instills a focus and sense of urgency in PE firms to liberate and generate cash as expeditiously as possible. To improve cash flow, PE firms tightly manage their receivables and payables, reduce their inventories, and scrutinize discretionary expenses. To preserve cash, they delay or altogether cancel lower-value discretionary projects or expenses, investing only in those initiatives and resources (including talent) that contribute significant value.
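The working-capital levers listed above are often summarized in a single standard metric, the cash conversion cycle (a textbook definition, added here for reference rather than drawn from the article):
    Cash conversion cycle (days) = DSO + DIO - DPO
        DSO = days sales outstanding (how long receivables take to collect)
        DIO = days inventory outstanding (how long stock sits before it is sold)
        DPO = days payables outstanding (how long the company takes to pay suppliers)
Reducing DSO and DIO, and stretching DPO where suppliers can sustain it, shortens the cycle and frees cash without touching reported profit.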
Public companies can take a page from the PE playbook and develop a similar performance improvement plan. Although the specifics will vary from company to company, any such plan will focus on increasing profits and improving capital efficiency.
Public company leaders should start with a blank slate and then objectively and systematically rebuild the company’s cost structure, justifying every expense and resource. First, management needs to categorize each activity as “must have” (it fulfills a legal, regulatory, or fiduciary requirement, or is required to “keep the lights on”), “smart to have” (it provides differentiating capabilities that allow the company to outperform its competitors), or “nice to have” (everything else). The next step is to eliminate low-value, discretionary work. And the final step is to optimize the remaining high-value or mandatory work.
3. Operate as though time is money. Consistent with the imperative to generate cash quickly to pay down debt is the mantra among private equity firms that “time is money.” There is a bias for action captured most vividly in the 100-day program that PE firms invariably impose on portfolio companies during the first few months of ownership. PE firms have little appetite for the socialization and consensus building common at many large public companies — private equity firms and their management teams feel a sense of urgency and rapidly make decisions to change.
Granted, portfolio company executives are extraordinarily empowered and have close working relationships with their actively involved boards. They do not need to navigate layers of oversight or appease external stakeholders. Still, public company executives could learn a lot from the private equity firm’s need for speed. Waiting carries an opportunity cost that too many public companies inadvertently and unfortunately pay.
4. Apply a long-term lens. Private equity firms act with speed but without forsaking rigorous analysis and thoughtful debate. They typically have three to five years to invest their fund, providing time to carefully assess potential targets and develop an investment thesis. PE firms then have a window of about 10 years to exit these deals and return the proceeds to investors. Despite the occasional claim to the contrary, PE firms do not tend to “flip” investments.
After realizing the short-term cost benefit of eliminating low-value activities, the general partners can afford to invest in the long-term value creation potential of the companies they acquire. In fact, that is the only way they will secure their targeted returns upon exit — by convincing a buyer that they have positioned the company for future growth and profitability.
The best private equity firms not only cut costs but also invest in the highest-potential ideas for creating core value. The art and science of making these judicious choices is a capability that public companies can develop.
5. Assemble the right team. PE general partners intuitively understand that strong, effective leadership is critical to the success of their investment — in fact, they sometimes invest in a company based on the strength of its management talent. The assessment of talent begins as soon as due diligence commences and intensifies after closing. Once decisions are made, they are swiftly executed. One-third of portfolio company CEOs exit in the first 100 days, and two-thirds are replaced during the first four years. A private equity firm will act assertively to put the right CEO and management team in place, and may well draw on its own in-house experts or external network to fill talent gaps.
Talent management continues beyond the first few months after the acquisition and extends well beyond the C-suite. Pressured to do more with less, PE firms must continually reassess individuals in middle as well as top management positions and quickly remove or replace low performers. These same talent management tenets can apply to public companies.
6. Link pay and performance. The CEO and senior managers at a private equity portfolio company are deeply invested in the performance of their business—their fortunes soar when the business succeeds and suffer when it fails to achieve objectives. PE firms pay modest base salaries to their portfolio company managers, but add highly variable annual bonuses based on company and individual performance, plus a long-term incentive compensation package tied to the returns realized upon exit. This package typically takes the form of stock and options, which can be generous, especially for CEOs. A 2009 study in the Journal of Economic Perspectives of 43 leveraged buyouts pegged the median CEO’s stake in the equity upside at 5.4 percent, whereas the management team collectively received 16 percent of company stock.
Top managers receive their annual performance bonus only if they achieve a handful of aggressive but realistic performance targets, unlike bonuses at public companies, which have become an expected part of overall compensation irrespective of performance. PE firms will reduce or even eliminate bonus payments if an operating company fails to achieve its targets.
Not only does management participate in the upside in a private equity operating company, but it also shares in the potential downside. CEOs and certain direct reports have real “skin in the game” in the form of a meaningful equity investment in the acquired company. Because this equity is essentially illiquid until the PE firm sells the company, it reinforces the alignment between top management’s agenda and that of the PE shareholders, reducing any temptation to manipulate short-term performance.
Although public companies may not be able to match the equity-based rewards of a successful PE venture, they can create a tighter link between management pay and performance, particularly over the long term. Companies can stimulate a high-performance culture by strengthening their individual performance measures and incentives to align them with true value creation. The first step is to reform the performance review process so that it truly distinguishes and rewards star talent.
7. Select stretch goals. As discussed, top private equity firms manage their portfolio companies by developing and paying rigorous attention to a select set of key and customized metrics. PE general partners quickly assess what matters in driving the success of an acquired company and then isolate these few measures and track them. They set clear, aggressive targets in a few critical areas and tie management compensation directly to those targets. PE firms watch cash more closely than earnings as a true barometer of financial performance and prefer to calculate return on invested capital rather than fuzzier measures such as return on capital employed.
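For reference, the two measures are commonly defined as follows (standard textbook definitions, not drawn from the article itself):
    Return on invested capital (ROIC) = net operating profit after tax (NOPAT) / invested capital (the debt and equity financing the operations)
    Return on capital employed (ROCE) = earnings before interest and tax (EBIT) / capital employed (total assets - current liabilities)
ROIC sets after-tax operating profit against the capital actually deployed in the business, which is why investors focused on cash returns tend to treat it as the cleaner yardstick of whether operations earn more than the cost of that capital.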
Many public companies are already following the private equity example by developing “dashboards” that track the key measures of business performance and longer-term value creation. Ideally, companies want to create a virtuous circle of performance measurement and management. The vision and long-term strategy should drive a set of specific initiatives with explicit objectives. These initiatives and their financial implications should, in turn, drive annual plans and budgets.
There are reasons that those who can afford the extravagant management fees continue to invest in private equity — the evidence shows that the best of these firms create economic value again and again, by implementing real and sustainable operating and productivity improvements at their portfolio companies. Although public companies do not enjoy certain liberties that highly concentrated private ownership affords, their boards and executives can learn from the better practices of PE firms, adapting them to the realities and constraints of their own business model to create additional and lasting value.

Predicting “Flash Crashes”


A controversial financial market indicator may be able to prevent short-term crises in the modern computerized trading world.

Shortly after 2:40 p.m. on May 6, 2010, a downbeat day for U.S. financial markets turned chaotic. Prices for the Chicago-based E-mini S&P 500 futures—the most liquid equity index contract in the world, with US$140 billion in average daily volume—fell rapidly. Before the Chicago Mercantile Exchange’s (CME’s) automatic stabilizer was triggered at 2:45 p.m., momentarily pausing trading, the Dow Jones Industrial Average had lost about 600 points (and was down nearly 1,000 for the day). After trading was allowed to resume, the markets recovered and regained most of the points within 20 minutes. 
The so-called flash crash of 2010 produced the second-largest point swing and the largest intraday point decline in the history of the Dow. The crash seriously damaged the confidence of investors, who withdrew $19.1 billion from domestic equity funds that month—the highest monthly outflow since the peak of the financial crisis in October 2008. It also raised questions about the reliability and effectiveness of today’s financial market indicators, as well as the increasing role played by high-frequency traders, whose computerized programs have reduced transaction times to thousandths and even millionths of a second. (It’s been estimated that as of 2010, high-frequency transactions accounted for about 70 percent of U.S. equity trading volume.)
In the end, a joint investigation by the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission reported no evidence of wrongdoing. It concluded that the crisis began when Waddell & Reed Financial Inc., based near Kansas City, tried to aggressively hedge its investment position by selling $4.1 billion in futures contracts in 20 minutes. This statement was met with widespread criticism; the CME itself issued a rare press release, expressing skepticism that one trade could have had so many ripple effects. Among the culprits that various experts have blamed are managers at Waddell & Reed, the technology, the stock market’s structure, and the evolution of the hedge fund industry.
But the flash crash didn’t just trigger a wave of recrimination. It led to a development with potential lasting impact on financial markets: A new proposed metric for future volatility, which its proponents say can be used to prevent short-term crashes in the modern computerized trading world. Such crashes occur suddenly and quickly spiral out of control. This new metric, volume-synchronized probability of informed trading (VPIN)—if it works as its creators claim—could become a crucial mechanism that uses probability analysis on past trading behavior to monitor imbalances in trading, predict future behavior, and alert traders when a crisis is imminent. Proponents say this warning signal would enable analysts or regulators to slow down or halt trading before getting sucked into a crash. Yet even if its critics are right and the metric isn’t reliably consistent, the debate over its usefulness has further demonstrated the need to address the vulnerabilities inherent in the financial system.
VPIN is designed to calculate “order flow toxicity,” or the probability that informed traders (such as hedge funds, which tend to all buy or sell at the same time) are taking advantage of uninformed traders (such as market makers, who typically lose money when order imbalances occur—that is, when there are so many buy or sell orders for a specific security that it becomes all but impossible to match all the orders). It uses real-time statistical analysis of trading behavior to estimate the relationship between informed traders’ orders and how much liquidity market makers are providing. If the order flow becomes too toxic from informed traders’ activity, electronic market makers will stop supplying liquidity to help stem their losses, which can create a cascading effect on other market makers and trigger an avalanche of similar withdrawals—the sort of frenzied activity that precipitates a flash crash.
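The published recipe behind VPIN can be sketched in a few lines of code. The following is a deliberately simplified illustration, not the patented metric itself: it uses equal-volume buckets and a basic tick-rule buy/sell split (the full method uses bulk volume classification), and the bucket size and window length shown are arbitrary assumptions.
    def vpin(prices, volumes, bucket_volume=10_000, window=50):
        """Toy VPIN estimate: group trades into equal-volume buckets, split each
        bucket into buy and sell volume, then average the order-flow imbalance
        over the most recent buckets. Values near 1 mean flow is very one-sided."""
        imbalances = []                      # |buy - sell| / bucket_volume per completed bucket
        buy = sell = filled = 0.0
        last_price = prices[0]
        for price, volume in zip(prices, volumes):
            # Tick rule: an uptick counts the volume as buyer-initiated, a downtick
            # as seller-initiated, and an unchanged price as half and half.
            up = 1.0 if price > last_price else 0.0 if price < last_price else 0.5
            last_price = price
            remaining = volume
            while remaining > 0:
                take = min(remaining, bucket_volume - filled)
                buy += take * up
                sell += take * (1.0 - up)
                filled += take
                remaining -= take
                if filled >= bucket_volume:  # bucket is full: record its imbalance
                    imbalances.append(abs(buy - sell) / bucket_volume)
                    buy = sell = filled = 0.0
        if len(imbalances) < window:
            return None                      # not enough buckets yet
        return sum(imbalances[-window:]) / window
A reading close to 1 means that nearly all recent volume has been one-sided, the "toxic" condition under which electronic market makers start withdrawing liquidity.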
The metric was developed by David Easley and Maureen O’Hara, economists at Cornell University, and Marcos López de Prado, head of high-frequency trading research at the Tudor Investment hedge fund. The trio have filed a patent, and have urged regulators to use VPIN as a watchdog. “Some traders are going to have superior information, for a variety of reasons,” says Easley. “And if you have people with good information, they tend to buy. It becomes imbalanced on the buy side. From the point of view of a market maker, that’s toxic, because the market maker’s job is to provide liquidity.” Using VPIN, Easley and his team retroactively calculated that the stock market registered some of the highest readings of toxicity in recent history an hour before the flash crash.
Not everyone agrees that VPIN will be the market’s savior. The most prominent critics are Torben Andersen, a professor at Northwestern University’s Kellogg School of Management, and Oleg Bondarenko, a professor at the University of Illinois at Chicago. In a paper published in October 2011, they argued that “our empirical investigation of VPIN documents that it is a poor predictor of short run volatility, that it did not reach an all-time high prior [to], but rather after, the flash crash, and that its predictive content is due primarily to a mechanical relation with the underlying trading intensity.”
One problem, they note, is that VPIN conflates trading volume and time: Because trades are grouped sequentially by volume rather than by regular clock time, the delineation between two days’ trading sessions is unclear. In short, they argue, VPIN is “highly dependent on when exactly you start counting trades. If you start counting one day later than someone else, your groups will contain different trades and your VPIN will be different.” Using a slightly different data set (which they argue is more accurate historically), the two researchers had to start in 10 to 15 different places before they could replicate VPIN’s results. The implicit concern is that VPIN could dupe investors and analysts into a false sense of security.
Andersen says he has no desire to get into a mudslinging match, and that his work on the subject is ongoing, but that he has “accumulated additional strong evidence that VPIN is not working as advertised.” Easley says his group’s research on VPIN also continues, and that Andersen and Bondarenko simply performed a fundamentally “different analysis—reasonable, but different.”
Meanwhile, regulators are increasingly concerned about their ability to keep up with the trend toward ultra-fast trading; in the spring of 2012, the SEC went before Congress to ask for a 2013 budget increase of $245 million, largely to protect investors and to “strengthen oversight of market stability, and expand the agency’s information technology systems.” The SEC has also taken note of VPIN’s possibilities. In late 2011, the agency assigned a group from the University of California at Berkeley to investigate VPIN’s promise, and in a working paper, the Berkeley group wrote that VPIN did “indeed give strong signals ahead of the Flash Crash event on May 6 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.”
Regulators and industry groups are also taking steps to humanize the world of computerized trading. In June 2012, a group of 24 brokers and traders sued CME Group, which owns major commodities exchanges in New York and Chicago, in an attempt to overturn new rules that cater to high-frequency traders. And in July, the SEC approved a new rule that will require exchanges and the Financial Industry Regulatory Authority to jointly devise a plan for the development of a consolidated audit trail, which would track every order, cancellation, modification, and execution of a trade for all listed equities across all U.S. markets.
Yet just one month after the SEC took this step, a mini flash crash occurred when Knight Capital Group Inc., whose market-making division handles about 10 percent of U.S. equity volume, lost $440 million and saw its stock plunge more than 70 percent after a “software glitch” dumped a huge number of orders into the market. “Those kinds of things are inevitable because they’re computer programs,” Easley says. “There’s not going to be a perfect one. The question is, What are your controls?”
Several countries around the world are cracking down on high-frequency trading, most recently Australia, Canada, and Germany. But because this form of trading is still dominant in the United States, and likely to remain a key feature of the U.S. financial markets for the foreseeable future, the need for a strong indicator to keep the system in check and protect investors will only continue to grow. The debate over VPIN may be taking place largely in the ivory tower, but if the metrics of probability can truly tame the dangers inherent in computerized trading, then the potential consequences reach far beyond. 

Sunday 10 March 2013

SEBI issues regulations on sale of preference shares


Mar 10, 2013, 12.58 PM IST

The Securities and Exchange Board of India (SEBI) has introduced regulations overseeing the public sale of preference shares and will allow hybrid securities, which combine debt and equity components, to be listed on exchanges.

SEBI said non-convertible redeemable preference shares sold by Indian issuers must have a minimum rating of "AA-minus" and a tenure of at least three years, according to its statement late on Friday.

Although Indian companies have previously issued preference shares, SEBI had not unveiled specific regulations covering the sale of these securities, which provide dividends and priority over stock investors in recouping investments in cases of defaults, but do not confer voting rights.

Private placements of preference shares will also be allowed to be listed on exchanges, SEBI said, a move that is intended to create a market for the trading of these securities.

Domestic banks will also be allowed to count some preference shares and perpetual debt instruments as part of their Tier I capital, after SEBI adopted the Basel III recommendations on the subject as part of the measures announced on Friday.

SEBI additionally simplified the registration process for stock brokers, allowing them to obtain a single certificate from an exchange to trade across all equity instruments.

Previously, brokers had to register separately for each category of equity products, such as derivatives.

Saturday 9 March 2013

Winning with IT in consumer packaged goods: Seven trends transforming the role of the CIO


Technology is increasingly fundamental to competitive advantage in the consumer-packaged-goods industry. IT leaders are stepping up to the challenge.

Consumer-packaged-goods (CPG) companies have traditionally viewed technology as a necessary business expense to be managed in the most efficient way possible. As IT spending increased over the past two decades, managers concentrated on standardizing IT systems across the company and reducing costs. Technology programs delivered on consolidating and integrating systems following mergers and acquisitions. Productivity flowed from improved supply-chain processes and from warehouse and plant-floor automation. But growing and differentiating the business through IT-enabled innovation was not a top priority for leadership teams.
However, during the past few years, CPG companies have grasped the commercial potential of the burgeoning supply of information about customers’ behaviors, needs, and wants. The volume of data emanating from point of sale, in-store engagement, mobile platforms, and social media is exploding and unleashing value from technology in ways that go beyond operational efficiency. This is leading to a fundamental change in what businesses expect from technology. Senior executives across all functions now realize that IT is capable of game-changing innovation and business transformation that can spur revenue growth, get products to market faster, and sometimes generate entirely new business models.
As information intensity grows, world-class IT in CPG companies requires more than just cost-effective service provision. Companies seeking to seize the commercial potential of technology will need to invest in key areas to keep pace not just with their competition but also with the expectations of their retail customers and information-savvy consumers. The possibilities created by technology should spark innovation in business processes and product offerings, and data-driven insights should help shape business strategies.
IT teams now have the opportunity to rise to the role of strategic enabler and differentiator. This will often require transforming the IT organization to bring new skills, new operating models, and new ways of engaging with the business.
Direct consumer relationships
CPG companies are increasingly using technology to create direct relationships with consumers. The popularity of private labels has been slowly eating into revenue, with consumers increasingly more conscious of price than brand. Indeed, the proportion of consumers who have returned to branded goods since 2008 has been lower than after previous recessions. Engaging directly with consumers can counteract this development by increasing loyalty and by improving insights on individual consumer needs, which can in turn lead to more accurate targeting of products and promotions. Many companies have found ways of going directly to consumers by offering them online services rather than just products. Kraft, for example, created a service available through a Web site and an app, much like a social network, where consumers share recipes. Johnson & Johnson’s BabyCenter provides an online community where parents share advice and product recommendations.
Most CPG companies now engage with consumers on external social-media sites. McKinsey’s research across 40 companies found ten distinct methods to interact with consumers on social media throughout the product decision-making process. These range from passive techniques, such as monitoring blogs and social networks for references about brands, to direct engagement in the form of targeted marketing, new-product introductions, or consumer outreach during public-relations crises. Coca-Cola, for example, monitors what consumers are saying about its products in real time. Coke was the first brand in the world to reach 50 million Facebook “likes,” while Diet Coke had 225,000 Twitter followers as of the end of 2012.
Weaving together the ability to maintain social brand presence, monitor consumer conversations, and respond in real time requires a complex and evolving set of technology solutions that look very different from traditional transactional CPG IT.
Mobile and location-based services
As smartphones and tablets proliferate in consumers’ pockets, in retail stores, and in the hands of the sales and service workforce, CPG companies are leveraging these new interaction models and connecting with retailers and individual consumers wherever they are and whenever they want. Mobile amplifies the impact of direct-to-consumer marketing. Levi Strauss, for example, uses social media to offer location-specific deals. In one instance, direct interactions with just 400 consumers led 1,600 people to turn up at the company’s stores—an example of social media’s word-of-mouth effect. When emerging technology capabilities make the mobile device the primary personal shopping tool, with which consumers discover, try, buy, and share experiences with new products, the cost of product launches will decrease and their impact will increase.
Richer information will come from using the camera and scanning capabilities of smartphones and tablets. By scanning codes on the product, consumers can get further information, such as advice on how best to use a product, recipes, complementary products to buy, and data on safety and sustainability. Known as “augmented reality,” these capabilities are undergoing trials today and could soon reach wider audiences.
Mobile solutions are also increasingly important within the enterprise itself. One CPG manufacturer, for example, equipped its merchandisers with tablet apps that use pictures and data entry to track on a daily basis how much shelf space it was allocated in comparison with the competition and whether retailers were complying with promotion agreements. By measuring and acting on this data, the company doubled its shelf space within a single region and increased retailer pricing and promotion compliance.
Predictive analytics
Consumer-goods companies have traditionally used historical performance, channel demands, and gut feel to determine price, promotions, assortment, and replenishment. Now companies are starting to turn to predictive analytics to refine this decision making. One CPG company mines a massive database of historical point-of-sale and promotions data, integrated with real-time data from social media and weather forecasts, to predict daily demand by store and optimize assortments and promotions in order to maximize sales and profitability.
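In practice, that kind of system boils down to a supervised model trained on enriched point-of-sale history. The sketch below is illustrative only: the file names, column names, and choice of model are assumptions, not a description of any company's actual pipeline.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical store/SKU/day history: units sold plus the signals thought to drive demand.
    history = pd.read_csv("pos_history.csv")
    features = ["day_of_week", "on_promo", "forecast_temp_c", "social_mentions"]

    model = GradientBoostingRegressor()
    model.fit(history[features], history["units_sold"])

    # Score the coming days to set store-level assortment depth and promotion intensity.
    plan = pd.read_csv("next_week_plan.csv")
    plan["predicted_units"] = model.predict(plan[features])
    print(plan.sort_values("predicted_units", ascending=False).head())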
Tesco systematically integrates analytics and consumer insights to build a sustainable competitive advantage. By analyzing data from its Clubcard loyalty program (which comprises more than 1.6 billion data points, ten million customers, 50,000 SKUs, and 700 stores), the retailer can better segment and target customer occasions.
P&G recently announced that it is increasing its analytics workforce fourfold. The company clearly believes that the way information is used in the business world is fundamentally changing and sees analytics as a core source of competitive advantage in the coming years. We expect many other CPG players to follow suit.
Demand-driven supply-chain management
Consumer-goods manufacturers are increasingly moving toward demand-driven supply-chain systems in order to minimize inventory levels, improve service performance, and reduce stock-outs. Adopting this approach has required companies to develop new algorithms to integrate near-real-time demand data with traditional forecasts and develop new IT systems to facilitate data sharing with customers and distributors.
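One simple way to picture such an algorithm (an illustrative sketch, assuming the retailer shares a near-real-time sell-through rate; the function, parameter names, and 40/60 blending weight are assumptions, not any specific company's method) is a replenishment quantity that blends the planned forecast with the latest observed demand:
    def replenishment_qty(baseline_forecast, realtime_demand_rate, on_hand,
                          in_transit, lead_time_days, safety_stock, alpha=0.4):
        """Blend the planned daily forecast with the latest observed sell-through
        rate, then order up to cover lead-time demand plus safety stock, net of
        inventory already on hand or in transit."""
        blended_daily_demand = alpha * realtime_demand_rate + (1 - alpha) * baseline_forecast
        target_position = blended_daily_demand * lead_time_days + safety_stock
        order = target_position - (on_hand + in_transit)
        return max(0, round(order))

    # Example: the plan assumed 120 units/day, but stores are actually selling 150/day.
    print(replenishment_qty(baseline_forecast=120, realtime_demand_rate=150,
                            on_hand=400, in_transit=200,
                            lead_time_days=5, safety_stock=150))   # -> 210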
One CPG company was able to capture more than £250 million ($377 million) in benefits and improve on-time delivery from 97 percent to 99.5 percent over three to four years by adopting customer-driven demand planning as well as integrating its manufacturing and logistics systems with best-of-breed customer-integration solutions.
Another example: a large grocery retailer led a predictive-ordering pilot with a CPG manufacturer that drove 15 percent growth in same-store sales for a flat category by improving assortment and eliminating stock-outs.
Initiatives like these reflect a broader trend among retailers who have invested in technology initiatives ahead of their CPG counterparts. When the retailer owns the algorithm and the data, it has more negotiating leverage over its suppliers. Furthermore, maintaining systems links with numerous retailer platforms could become highly complex and costly for CPG companies to manage and hinder the development of their own solutions. Unless CPG manufacturers begin to shape their own solutions, they will continue to be saddled with an increasing number of reactive and expensive one-off customer IT requirements just to keep up. Analytics and data aggregation can tip the balance of power back in their favor, however. While many functions may build analytics teams, IT must enhance its role in delivering the data and tools to enable these teams to execute efficiently.
Idea-to-product acceleration
For most consumer-goods companies, introducing new products faster, at lower cost, and with greater likelihood of market success is the constant but elusive goal. P&G has been a trailblazer in the use of technology for this purpose. It has adopted design tools to create realistic virtual prototypes, thus saving time in design iterations. Additionally, it has leveraged virtual-reality techniques to develop studios that simulate new products sitting on shelves in order to test design effects internally and with consumers. After an initial period of testing and refinement, such techniques are now being used in the development of more than 80 percent of P&G’s new products.
These kinds of tools can create value across the industry, and it is clear that the mainstream of the market has only scratched the surface of the potential.
Safety and traceability
With the increasing consumer and regulatory focus on safety and the resulting greater likelihood of product recalls, the ability to trace a product through the supply chain from the raw-goods supplier into the store has become more important. To accomplish this task in the most effective way, companies need to ensure they have good master data on products, as well as the right tools to tag and scan items in collaboration with their suppliers as products progress through the supply chain. Companies also need tracing functionality linked to product databases. One consumer health care company has integrated serialization-management software, data-carrier technology such as radio-frequency identification and bar codes, and additional authentication such as holograms and nanotags to trace its products through the various stages of the supply chain.
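In data terms, traceability reduces to keeping a serialized event trail per unit and being able to replay it quickly. A minimal sketch, with illustrative field names (not drawn from the company cited):
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class TraceEvent:
        serial_number: str   # unique unit ID, e.g. encoded in an RFID tag or bar code
        stage: str           # "raw_supplier", "plant", "distributor", "store", ...
        location: str
        timestamp: datetime

    def trace(events, serial_number):
        """Return one unit's journey through the supply chain, oldest event first,
        so a recall can be scoped to the affected lots and locations."""
        return sorted((e for e in events if e.serial_number == serial_number),
                      key=lambda e: e.timestamp)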
Sustainability
There is increasing consumer demand for transparency on how companies perform when it comes to sustainability and corporate social responsibility. One start-up, GoodGuide, allows consumers to browse safety, health, and sustainability ratings for more than 70,000 products.
Today, only 10 percent of public companies voluntarily publish their carbon-emission data, but that number is growing. One CPG manufacturer has differentiated its products by printing carbon-footprint information directly on product labels. Companies interested in adopting similar methods must first be able to track, manage, and analyze a tremendous amount of data throughout the supply-chain process.
Getting ready to win with IT
Exploiting the commercial and operational potential of technology-driven trends in consumer packaged goods will require the close cooperation of IT and business leaders, a sharp strategic focus, a fast and nimble way of working, and, often, new strategic and technical talent.
Integrating technology and business strategies entails a constant conversation in the context of a multiyear road map rather than the typical annual budgeting process. To truly shape the direction of the business in these technology-enabled domains, executives must engage in an ongoing dialogue to ensure that their technology strategy continually evolves and that they make the appropriate investments in advance of business demand. We believe that this dialogue should start with these critical questions:
  • Given our own business priorities and challenges, what are the two or three technology trends on which we want to focus?
  • For these chosen priorities, how do our current commercial and IT capabilities compare with best-in-class examples among competitors? How will they create value in the short and long term?
  • Precisely who in our organization is responsible for working on a technology-enablement strategy and ensuring its adoption? To what extent do business leaders take personal responsibility for the success of this strategy?
  • What capabilities must we have in-house to win, and where do we leverage the market? For example, should we seek help from “analytics as a service” providers to accelerate insights, or is this such a strategic capability that we must build it in-house?
  • How can we resource proof-of-concept efforts to show early impact and demonstrate potential to fellow business leaders?
Transforming IT
CIOs who seek to lead the business on a journey to capture value from these strategic technology-enabled opportunities should be ready to push through a wide-ranging transformation spanning several areas.
Aligning the leadership team
Senior executives should act as role models for the IT organization as a whole. Bringing in new blood can help by introducing experienced practitioners who can provide credibility in areas such as analytics.
Attitudes and actions need to change as much as personnel. It will take time to root out “order taker” attitudes and instead instill the mind-set that IT is going to bring ideas and challenge business and functional leaders on whether they are getting the most from their information assets and technology capabilities. Leaders will need to spot and celebrate examples of the right approaches. IT teams cannot simply assert they are now enabling the strategy; they must show how they’re doing it.
To help facilitate this process, leaders may need to revisit governance that is designed for budgetary control rather than building strategic capabilities. Investments in foundational capabilities in analytics and management of big data will drive benefits across functions but will remain hard to fund if every stakeholder is just looking at his or her slice of the budget pie.
Building a nimble operating model
This kind of review will almost certainly identify and unlock demand for investing in a wide range of valuable opportunities—more than the organization may have historically pursued. The operating model in IT needs to be prepared to handle that demand growth and deliver value at the pace that increasingly fast-moving markets demand.
CPG IT needs more agility in order to achieve the shortest possible time lags between concept and deployment. A strong “test and learn” culture, with a laser focus on business outcomes, is essential. This shift in operating model can be likened to the difference between the methodical plan, build, and deploy cycle of enterprise-resource-planning (ERP) system development and the daily production batches of a Web-services company. This change cannot happen overnight, but we’ve found that productivity improvements of 30 percent or more can be unlocked from traditional development organizations. Flexibility in sourcing, vendor management, and talent management will support this. However, outsourcing relationships designed for a more stable world can at times be a constraint rather than an enabler.
Recruiting relevant technology and strategic talent
Carrying out the necessary IT transition will not be possible without the right people in technical and strategic roles. In general, today’s CPG IT organizations often lack resources with the requisite technical skills in analytics, mobile technology, programming, and user-interface design. While some may have a few team members who are gaining deep expertise, they lack the capacity to deliver at scale. This will require the development of new centers of excellence and new sources of talent.
Most CPG players will need to up their game in data architecture, governance, and management. With a move away from multiyear ERP programs toward projects with shorter cycles, it will be increasingly important to have a solid data foundation that can be reused to avoid unnecessary complexity and achieve scale benefits. Some companies are considering creating a chief-technology-officer or chief-data-officer role to indicate the importance of these areas in the consumer business.
Piloting new systems
New systems and tools can help build a learning and innovation culture. These might include platforms for social monitoring and insight, such as Buddy Media, NM Incite, and Radian6, or tools for managing big data, such as Cloudera and DataStax.
Leveraging such tools early in the adoption cycle will require openness to working with beta products and start-ups as a complement to the established technology partners most IT organizations depend on today. Additionally, IT organizations will need to devote time to scan for emerging solutions and develop new engagement models to conduct pilot programs and bring solutions to market. This means setting aside valuable funding and talent for innovation projects that IT will govern, which is not something easily achieved in most technology organizations today.
Information is becoming the lifeblood of the CPG industry. The demand for technology solutions to enable data-driven decision making will only increase. This is a historic opportunity for IT leaders to drive a true step change in creating value for the business. But staying ahead of the game will require nothing less than a transformation of the IT organization’s mind-set and operating model.
About the Authors
Sirish Chandrasekaran is an associate principal in McKinsey’s San Francisco office, Robert Levin is an associate principal in the Boston office, Harry Patel is a consultant in the New York office, and Roger Roberts is a principal in the Silicon Valley office.

Driving the top line with technology: An interview with the CIO of Coca-Cola


Ed Steinike is reshaping his CIO role as The Coca-Cola Company accelerates its use of technology innovations in operations, marketing, and sales.

Ed Steinike, vice president and CIO of The Coca-Cola Company, has set his mind on being what he calls a “revenue-generator CIO.” In this interview, he talks about his department’s journey from back-office function to business partner and how it uses technology to cultivate direct relationships with customers and to develop a demand-driven supply chain. Finally, Steinike describes a fledgling innovation that integrates most of the technology-driven trends in the consumer-packaged-goods industry.
 
McKinsey: How is the role of IT changing at Coca-Cola, and, with it, your role as CIO?
Ed Steinike: IT and marketing are very close partners at Coca-Cola today—more so, I think, than at most other companies—and that’s the way it should be. Coke is spending hundreds of millions of dollars a year on digital marketing, and that number will, no doubt, continue to rise. Almost all of that spending is IT-related. This development calls for a broader CIO role. It’s not enough to be an operational back-office CIO running the systems. It’s also not enough to be a process CIO reinventing the supply chain and transforming support functions. Important as those two roles are, they need to be complemented by what I call the revenue-generator CIO or business-level CIO.
McKinsey: What were the beginnings of the strategic partnership between marketing and IT at Coca-Cola?
Ed Steinike: Our marketers started to think more seriously about digital channels five years ago or so. As mobile adoption expanded, they started to build a direct connection with our customers by pushing mobile applications for social-media sites and our loyalty programs, such as My Coke Rewards.
Marketing was driving a lot of it through its own advertising and digital agencies while IT, at the time, was struggling to be relevant. We were viewed as a back-office function, not as one of the strategic leaders and partners in our digital-marketing efforts. I believed we should be bringing ideas to marketing instead of marketing coming to us for creative solutions and more often than not getting the answer, “Sorry. We don’t have the people to do these things.”
Our first step was simply to offer traditional operating, hosting, and security for the sites and platforms the agencies were building. We did that quite well and now have over 600 consumer sites hosted in one platform environment with great data protection.
McKinsey: What did it take to get to the level of business partner, to get to the point, for example, where you were coming up with cool mobile apps and connecting them with consumer-relationship programs?
Ed Steinike: It’s all about people. Just like Coca-Cola’s marketing organization, which hired some really smart people in the field of digital and interactive marketing, we started to recruit talented IT people who were more entrepreneurial, a little more strategic in their thinking, and who connected better with what marketing was trying to achieve. As one example, my enterprise architect is based in Silicon Valley with his team—closer to where the solutions are likely to come from.
That said, we still have some way to go when it comes to getting young people with a different kind of mind-set. We used to bring in 35 IT interns each year, but we hired none of them despite the great work they did, because our focus was on seasoned hires, for example, business and systems analysts and project managers. We certainly must have experienced people for big systems applications and the like, but for application-development work using software as a service, an entry-level hire may be just fine.
We’re now hiring five of our interns each year, and it’s amazing what they can do. They look at the world differently, and they come up with new answers. They help us build a new culture in which IT is a better business partner. It will take years to complete this cultural shift, but it will only happen if we address the people side of it.
McKinsey: What is the IT department doing today to cultivate direct consumer relationships?
Ed Steinike: Recently, for the 2012 Summer Olympics, we created mobile applications tailored for over 100 countries and available in the Android and iPhone stores, in order to create a digital-marketing event around the globe that boosted our impact well beyond our traditional sponsorship and television advertising channels. The IT department built some of the applications and managed others created by external agencies or our consumers. When content comes from thousands and thousands of sources, it requires a complete ecosystem. We’re now running content-management systems, digital-rights-management systems, digital-access-management systems, and mobile-distribution systems. Packaging content and distributing it around the world is a very big area for us right now. Today, digital marketing is a joint activity in our company, with marketing in many cases looking directly to us for better ways to reach our customers.
McKinsey: Having demand-driven supply-chain systems is a trend in consumer packaged goods. What is Coca-Cola doing in this field?
Ed Steinike: It’s a very important area for Coca-Cola. We’ve been working hard the last couple of years to integrate our plant and distribution systems to make it possible for us to see exactly what’s happening with our products as they move through the supply chain. One critical benefit is to ensure that we can minimize out-of-stocks. Imagine that we direct our Facebook fans to a local outlet with a targeted promotion and the product isn’t available. We’ve lost a sale and had a negative impact on relationships with our consumers. The inventory at the back of a store is pretty visible, but we lose track at the shelf point and the cooler point. We’re experimenting with some interesting methods to fill that gap, such as radio-frequency identification and electronic tagging of our products.
Interestingly, we have a pretty cool solution to this in the United Kingdom, where we have merchandisers take pictures of our shelves and coolers when they come into stores to talk about orders, promotions, and so on. We spent a lot of time trying to automate the processing of information found in the photographs, but it turned out that a better solution was to send the photos to a company in India: its staff studies the shots and in less than a minute gets back to us with stock counts of each product. It’s a nice blend of technology and human process. Is there a better solution? We’re still experimenting.
McKinsey: What is the best example of IT’s new role at Coca-Cola?
Ed Steinike: Coca-Cola Freestyle, our revolutionary fountain dispenser, integrates most of what the IT department is up to and also points the way toward a technology-driven future for beverages that might be quite different from the present landscape. Earlier fountains were basically mechanical machines. Coca-Cola Freestyle is effectively a complex and sensitive enterprise-resource-planning environment. A computer embedded in the new fountain machine calculates with surgical precision the ingredients of over 100 different beverage brands. To begin, consumers experiment a bit with various brands until they find one they really like. When they do, we find that they come back to our fountains for that particular drink and this leads to increased same-store sales.
The computer records all the data involved in every single pour. Each fountain knows when it’s running low on certain products. We are also using automated ordering in many Coca-Cola Freestyle locations whereby the fountain can build its own orders for supplies and place them directly into the system. It would even optimize the order so that you pay the lowest possible delivered cost. There are other things we can do with the operational data, such as working with the owner of the fountain location to talk about, for example, what drinks are moving at certain times of the day and, as a result, potential opportunities to adjust pricing and promotions. Broader marketing data represent another area. Is there, for example, a particular drink that happens to be selling really well in a particular region, country, or city?
We have visions of how we will use the data as we deploy thousands and thousands of the machines in locations such as restaurant chains, entertainment venues, and retail stores. We’ve got 50 million–plus fans on Facebook. We’ve got some 18 million people on My Coke Rewards. If we could bring these audiences together around Coca-Cola Freestyle, we could learn some really cool things.
McKinsey: A final question: What’s your advice to a CIO starting out in the consumer-packaged-goods industry?
Ed Steinike: My advice is that there’s an interesting shift going on in the world of consumer packaged goods, and IT has to stay very close to the new trends if it wants to be relevant. If you’re comfortable being an operational CIO you’ll still be needed, but I don’t think you’re going to help your company grow as fast as it could.

Tuesday 5 March 2013

The RBI’s guidelines for new bank licences


To enter the banking business

a minimum track record of 10 years

RBI has……
laid down elaborate 'fit and proper' criteria
not excluded any category, such as brokerages or real estate companies, from entering the banking space.
paved the way for corporate houses like the Anil Dhirubhai Ambani Group, Larsen & Toubro, the Tatas, Mahindra and Mahindra, Life Insurance Corporation and the Aditya Birla Group to enter the banking business.


"Entities/groups should have a past record of sound credentials and integrity, be financially sound with a successful track record of 10 years,"


The minimum paid-up capital for setting up a bank has been pegged at Rs 500 crore.
The cap on foreign investment, including FDI, FII and NRI holdings, has been set at 49%.

Norms notified by RBI....



On receipt of the licence, the promoter has to start operations within one year and list the company within three years of commencement of business.
Also, new banks should open at least 25% of their branches in unbanked rural centres.

Applications to RBI...



Those seeking to set up a bank would have to submit applications by July 1, 2013.
The RBI will display the names of applicants on its website.

Feedback about applicants......



Before granting licences, RBI would seek feedback about applicants from other regulators and from enforcement and investigative agencies such as the I-T Department, CBI and ED, as deemed appropriate.


Some Information……


At present, there are 26 public sector banks and 22 private sector banks. 
Only 35% of India's adult population has accounts with banks and other financial institutions, compared with a global average of 50%.
The figure is 41% in the case of developing economies.

After grant of licence….

Following the grant of licence, the promoter group, which could be a public sector entity as well, will be required to set up a wholly-owned Non-Operative Financial Holding Company (NOFHC).

Purpose of NOFHC...

The NOFHC is aimed at insulating the banking operation from extraneous factors such as the group's other businesses, i.e., commercial, industrial and financial activities not regulated by financial sector regulators.

Saturday 2 March 2013

Impose limit on global transactions of cards, RBI tells banks

Asks banks to impose a limit of $500 on all global cards and refrain from issuing such cards unless specifically sought by customers



In order to check frauds, the RBI has asked banks to impose a monetary limit for international transactions on credit and debit cards and to refrain from issuing cards with global access unless specifically sought by the customer.

"All the active magstripe international cards issued by banks should have threshold limit for international usage. The threshold should be determined by the banks based on the risk profile of the customer and accepted by the customer by June 30," it said.

Until this process is completed, a threshold limit not exceeding $500 may be put in place for all debit cards and all credit cards that have not been used for international transactions in the past, it said.

The notification has been issued following cyber attacks, which, according to RBI, have become "more unpredictable".

"Electronic payment systems are becoming vulnerable to new types of misuse, it is imperative that banks introduce certain minimum checks and balances to minimise the impact of such attacks and to arrest or minimise the damage," it said.

Besides introducing additional security features in the card, banks would also be required to put in place a real time fraud monitoring system and a mechanism to ensure that cards can be blocked through an SMS by the cardholder.

These initiatives, RBI said, are needed to ensure that transactions effected through such channels are safe and secure and not easily amenable to fraudulent usage.

The announcement comes in the backdrop of a slew of card frauds that have taken place in the recent past, leading to unauthorised withdrawals by unscrupulous agents.

With small saving corpus shrinking, states go for higher mkt borrowing

The net outflow under NSSF rose to Rs 13,600 crore in H1FY13, against Rs 6,500 crore in H1FY12, according to Icra

Small savings, which once constituted a major part of borrowing by states, are losing their sheen, as reflected in the changing borrowing mix of state governments.

The stock of the National Small Savings Fund (NSSF) has been declining consistently over the last few months, data from the Reserve Bank of India and a research report from credit rating agency Icra show.

Even states such as West Bengal, traditionally among the major contributors to overall small savings mobilisation, have lost ground to high-yielding saving instruments like chit funds, turning their net collections negative. Recently, the West Bengal Small Savings Development Officers Association demanded action against mushrooming chit fund companies in the east because of the steady decline in collections. According to Gautam Deb, leader of the CPI(M), small savings and post office collections in West Bengal during the April-October 2012 period were merely Rs 194 crore, against the targeted amount of Rs 8,370 crore.

However, data from various sources show that the sharp fall in small savings is now a national phenomenon. The Mid-Year Economic Analysis 2012-13 by the government of India indicated that the net outflow under NSSF rose to Rs 13,600 crore in H1FY13, against Rs 6,500 crore in H1FY12, according to Icra.

Further, with RBI slashing interest rates, the interest rates on small savings are also expected to come down.

“While a lower policy rate would transmit to lower bank deposit rates, it is possible that the rates on small savings instruments for 2013-14 may be revised downwards during the annual review, dampening the relative attractiveness of the latter,” predicts ICRA.

Around 2009-10, when policy rates were low, small savings were an attractive option as bank deposit rates were lower. ICRA points out that data published by the RBI indicate that the stock of NSSF loans of the 28 Indian states rose by Rs 23,100 crore in 2009-10 and by nearly Rs 40,000 crore in 2010-11. In 2011-12, however, the stock of NSSF loans fell by Rs 8,200 crore; there was a net depletion in the small savings corpus because outflows exceeded inflows, explains Jayanta Roy, analyst at ICRA.

Notably, the mandatory allocation of net small savings collections to the states was reduced to 50 per cent from 80 per cent from 2012-13 onwards. States can exercise the option of either 50 per cent or 100 per cent of the net collections in their own territories.  About 16 of the 28 states have opted to avail 50 per cent of net small savings collections in 2012-13.

According to an August 2012 press release by the government, the net small savings collection in 2009-10 was about Rs 64,300 crore, which came down to Rs 58,600 crore in 2010-11, and turned negative by Rs 1,900 crore in the April-June period of the 2012-13 fiscal.

Some of the popular small savings instruments include the Public Provident Fund (PPF), National Savings Certificates and post office savings schemes. To boost the popularity of small savings, the government had, among various measures, fixed the rate of interest on most small savings schemes in alignment with G-Sec rates of similar maturity. While bank deposits offer interest rates 50-100 basis points higher than small savings, higher investments in gold and real estate have also led to shrinkage in small savings.

“The rate of savings itself is coming down, so there will be a decline in all forms of savings. Moreover, there has been more interest in investments like gold and real estate because they are seen as a hedge against inflation. Till inflation comes down, the savings rate will continue to be low,” said Devendra Pant, Director, India Ratings & Research, a Fitch Group company.

With the NSSF corpus declining, states opted to raise funds through market borrowing, or state development loans (SDLs), through the Reserve Bank of India window. The net funds raised through SDLs by the 12 states in the ICRA sample increased by a steep 65 per cent in 2011-12 and by a further 27 per cent year-on-year in April-December 2012, the report says.