6 Reasons Solid State Memory Is The Biggest Story In Computing
Guest post written by Narayan Venkat
Narayan Venkat is VP, product management, at Violin Memory.
Make no mistake: the sudden boom in solid-state storage is no flash in the pan.
Storage systems based on solid-state flash memory now compete directly against traditional systems using spinning hard drives for mission critical jobs in data centers, particularly at financial institutions and Web companies. Venture capitalists and strategic investors have been injecting capital into the market, fueling a steady clip of multimillion dollar acquisitions. Fusion-IO, which makes flash accelerators for servers, held one of 2011’s most successful IPOs.
Arguably, we’re witnessing the biggest shift in the industry since hard drive-based systems nudged tape storage into the background.
But why is it happening now? And why is the acceptance of flash-based systems occurring so rapidly? Flash memory, after all, isn’t new: Fujio Masuoka led a team at Toshiba that invented flash in the mid-’80s. Flash remains a standard in mobile phones and consumer electronics, and manufacturers have produced flash memory chips in large volumes for years.
Nor has the hard drive industry hit a plateau and left a vacuum for flash to fill. Drive manufacturers have continued to push the pace of innovation, and hard drives and flash will, in fact, both likely grow rapidly over the next decade.
Some analysts believed that we’d see a huge uptick in flash notebooks years before solid state memory came to the data center. Instead, the reverse happened: here are six reasons why.
- Latency
Flash is inherently faster at most data-retrieval tasks than drives, cutting latency by as much as 95 percent. Drives are mechanical devices with motors and moving parts; flash operates purely on electrical signals. To achieve a competitive edge in speed, customers need flash arrays and accelerator cards.
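A rough back-of-the-envelope sketch shows where a figure like 95 percent can come from. The latencies below are assumed, order-of-magnitude numbers for illustration, not measurements of any particular drive or array:

```python
# Illustrative latency comparison (assumed, order-of-magnitude figures;
# not measurements of any specific hard drive or flash product).
hdd_seek_ms = 4.0        # assumed average seek time for a fast enterprise disk
hdd_rotation_ms = 2.0    # assumed average rotational delay at 15,000 RPM
hdd_latency_ms = hdd_seek_ms + hdd_rotation_ms

flash_latency_ms = 0.3   # assumed random-read latency for enterprise flash

reduction = 1 - flash_latency_ms / hdd_latency_ms
print(f"HDD random read:   {hdd_latency_ms:.1f} ms")
print(f"Flash random read: {flash_latency_ms:.1f} ms")
print(f"Latency reduction: {reduction:.0%}")  # ~95% with these assumed figures
```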
- Technology
The stresses of repeatedly writing and erasing data, though, have meant that flash chips historically have lived short lives. That’s not all: the protocols for writing and erasing data, and for monitoring errors and optimizing performance, can eat up computing cycles if not properly managed. Simply swapping flash memory in for conventional hard drives in a computer or enterprise-class storage system can actually lead to worse performance.
To get around these problems, some flash systems companies have designed software and/or semiconductors from the ground up. These technologies can minimize the number of write-erase cycles the flash chips in a storage array must endure, extending their lives from several months to several years, or manage the chips so that no individual chip is over-used. They can also schedule the “janitorial” tasks flash chips must perform so those chores don’t cause glitches in performance. If you wondered why my company is called Violin Memory, now you know: it’s because our technology conducts an orchestra of semiconductors.
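As a loose illustration of the kind of bookkeeping such software performs, here is a toy wear-leveling sketch in Python. It is a minimal model of spreading erase cycles evenly across blocks, not our actual controller logic:

```python
import heapq

class WearLevelingAllocator:
    """Toy wear-leveling model: always hand out the free block with the
    fewest erase cycles so wear spreads evenly across the flash device."""

    def __init__(self, num_blocks):
        # Min-heap of (erase_count, block_id) pairs; all blocks start unworn.
        self.free_blocks = [(0, blk) for blk in range(num_blocks)]
        heapq.heapify(self.free_blocks)

    def allocate(self):
        """Return the least-worn free block for the next write."""
        erase_count, blk = heapq.heappop(self.free_blocks)
        return blk, erase_count

    def release(self, blk, erase_count):
        """Erase a block and return it to the free pool (one more cycle of wear)."""
        heapq.heappush(self.free_blocks, (erase_count + 1, blk))

# Usage: each new write lands on whichever block has seen the least wear.
alloc = WearLevelingAllocator(num_blocks=8)
blk, wear = alloc.allocate()
# ... write data to block `blk` ...
alloc.release(blk, wear)
```

A real controller juggles much more (bad-block maps, garbage collection, error correction), but the principle is the same: no chip gets worn out ahead of its neighbors.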
- Inertia and The Innovator’s Dilemma
Why didn’t the incumbent storage giants build these systems first? Because it took a lot of work and quite a bit of risk. The new generation of flash companies had to sit down and develop new semiconductors and software platforms. Silicon in Silicon Valley: how 1980s can you get? It took money, time and creative thinking. I’m not saying EMC isn’t capable of that. I’m saying that when the executives at the large companies looked at the risks and rewards, they chose to continue to sell yesterday’s products.
Most investors and VCs did the same thing. How many VCs have you heard proclaim that they want to invest in hardware companies facing challenging technological hurdles that might take a few years of trial and error to solve? Most turned to social networking instead. They missed out.
- Energy
Data centers consume enormous, and growing, amounts of power. As a result, companies are faced with giving up growth or getting more efficient. Google, Microsoft, Yahoo and others have begun to build data centers in frigid locations like Buffalo, New York, and Finland to chop their air conditioning bills.
Flash arrays consume a fraction of the energy of drive-based systems. Research firm iSuppli once estimated that replacing just 10 percent of short-stroke hard drives, which account for only a tiny fraction of drives in data centers, with basic, relatively unsophisticated solid-state drives could save 57,000 megawatt-hours of power a year. Drives are mechanical systems: not only do they consume more power directly, they require more air conditioning.
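To see how savings on that order can add up, here is a hedged back-of-the-envelope calculation. The drive count and per-device wattages below are illustrative assumptions, not iSuppli’s actual inputs:

```python
# Rough, illustrative estimate of annual energy saved by swapping hard drives
# for SSDs. Every figure below is an assumption made for the arithmetic, not
# a number taken from iSuppli's study.
drives_replaced = 1_000_000   # hypothetical number of drives swapped out
hdd_watts = 10.0              # assumed average draw of an enterprise hard drive
ssd_watts = 3.5               # assumed average draw of a basic solid-state drive
hours_per_year = 24 * 365

saved_wh = drives_replaced * (hdd_watts - ssd_watts) * hours_per_year
saved_mwh = saved_wh / 1_000_000
print(f"Direct savings: ~{saved_mwh:,.0f} MWh per year")  # ~56,940 MWh

# Cooling savings come on top: every watt a drive no longer dissipates is a
# watt the air conditioning no longer has to remove.
```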
- Big Data
Which industries will put big data to work? Just about all of them. You are going to see insurance companies overhaul their actuarial processes: policies will be issued with an eye toward crop futures and oil prices. Real estate companies will create markets for parking spots. Retailers will parse digital video streams for data about how consumers, and different segments of consumers, shop their stores. Running these applications will require high-performance data retrieval and storage systems. Otherwise, the results could take decades.
- Volume Economics
What happens next? The storage industry is massive so you won’t see the market suddenly shift completely to flash. But the cost and performance benefits of starting the migration now are more than apparent. Some new capabilities—such as mixing flash with virtualization technology to enhance the economics of a solid-state array—will begin to be discussed more frequently among CIOs.
Any way you slice it, flash is fast becoming a debate you can’t ignore.
IT-centric enterprise BI models unsustainable, says Forrester
Fast-changing business intelligence requirements drive need for self-service BI
June 15, 2012 06:00 AM ET
Computerworld - Enterprise business intelligence models that are too heavily IT-centric are unsustainable, a new report from Forrester Research cautioned this week.
Increasingly, businesses that want to develop robust business intelligence (BI) capabilities will need to adopt self-service BI tools and methodologies in order to succeed, Forrester noted.
Two major factors are driving the need for self-service BI.
The first is that BI requirements change faster than IT's ability to keep up. Even IT organizations with the latest tools and best practices often have to struggle to keep up with business requirements for BI applications, Forrester researcher Boris Evelson said in the report.
The other major issue is that conventional approaches to software development are poorly suited for today's BI needs, he said. "The traditional waterfall methodology for the software development life cycle calls for collecting user requirements, transforming them into specifications, and then turning these specifications over to developers," Evelson noted in the report.
"While this approach is often successful for traditional enterprise application implementations, it won't work for the majority of BI requirements," he said.
Increasingly, enterprises can benefit from tapping self-service tools for their BI requirements, he said. While IT needs to retain control of complex, mission-critical BI applications, a vast majority of other BI initiatives need to be handled directly by the business units that will be using the applications.
"We maintain that in an ideal BI environment, 80% of all BI requirements should be carried out by the business users themselves," he said.
The key to success with self-service BI lies in choosing the right tools, Evelson noted. To be really useful, a self-service BI tool should enable casual users, technology-savvy users and executives to build their own new queries, reports and dashboards, he said.
The Forrester report outlines several features that enterprises should look for in self-service BI tools. Some examples include features such as automodeling, data virtualization, search-like graphical user interfaces and collaboration support.
Self-service BI does not, however, mean eliminating IT altogether from BI projects.
"To do it right, IT still has to setup infrastructure, architecture, tools and policies upfront," Evelson told Computerworld by email today.
Many business organizations try to do an end run around IT by having vendors implement self-service BI capabilities. "But that's not the right way, [because] it won't give them access to the entire enterprise data, just what they themselves can connect to," he said.
Sometimes business units try to enable self-service BI capabilities by signing up with hosted providers. But again, without IT involvement, such efforts can be somewhat limited in scope, Evelson said.
"It's OK for situations where IT just doesn't have the time, skills, or budgets," he noted. "But again, this'll just give them access to a subset of enterprise data."
Cloud providers aren't selling the real value of the cloud
Cloud providers focus on time to deploy and other tactical claims, not the core strategic value
I hear this pitch all the time: "Cloud computing provides the shortest time to deploy or time to market because there is no need to purchase and configure hardware and software." That makes sense.
However, the value that comes from speedy deployments is often lost in the process that occurs in most Global 2000 companies as they allocate resources, understand compliance, and deal with security. The advantage of not purchasing hardware and software is significantly diminished, considering the amount of work required to move or create a system wherever it may be sourced. The cloud providers are emphasizing a small advantage of the cloud.
If the value of time to deploy is not the big deal we're told it is, what is the compelling reason to move to cloud computing? You might think it's the efficiency of the public cloud platforms, but that too is a relatively small advantage.
The big advantage is the ability to quickly align with changing requirements, an area where traditional approaches to IT have failed for the last 20 years. In fact, they're getting worse at it.
The trouble is that the value of adaptability, which far exceeds that of other benefits of cloud computing, is both difficult to define as a concept and even more challenging to model for a specific problem domain or a whole enterprise. Nonetheless, it should be the ultimate objective of cloud computing and -- for that matter -- any new technology.
Cloud providers should stop leading their pitches with the tactical values that vary greatly from enterprise to enterprise and instead discuss the core strategic reasons for moving to the cloud. For its part, IT needs to get a clue about this concept so that it can apply cloud computing technologies in the right directions. My fear is that both providers and enterprises don't yet understand the true value of cloud computing, and tactical "quick win" thinking will get us into trouble -- again.