Real Benefits of an In-Memory Database
What is an in-memory database? Simply stated, it’s a database that is held in computer memory rather than in a traditional location such as disk storage. For most business applications, the time needed to store and retrieve data is the biggest factor in determining how quickly tasks can be completed – and while a few milliseconds to retrieve data from a disk may seem trivial, when multiplied by thousands or millions of records these delays can seriously affect system speed and performance. When the data is already in memory, access is almost instantaneous.
Businesses today need real-time information – in other words, they need to know what is happening now, not what may have occurred yesterday or even a few minutes ago. Businesses expect their computer systems to do more, and to do it faster than legacy systems, so they can keep up with a rapidly changing world. An in-memory database is the obvious answer, but until recently memory was too expensive and computer systems were not built to handle the large amounts of memory required efficiently. Now, all that has changed. Modern databases (like SAP HANA and applications built to run on it) can deliver the speed and responsiveness today’s business users need. Programs written for in-memory data require fewer instructions, which execute more quickly, and user queries return information without the delays that are common with traditional database systems.
In-memory databases, also referred to as real-time databases (RTDB), have been around since the 1980s and were used in industries where rapid response times were critical, such as telecommunications, banking, travel, and gaming. Now, affordable memory and system hardware are available to all industries and applications, including ERP systems.
Where are in-memory databases used today?
The emergence of affordable in-memory-capable systems has opened opportunities for faster processing and more responsive systems for business applications of all kinds, not just the high-volume, transaction-oriented systems mentioned above. In-memory databases are ideal for applications that process a lot of data (think advanced planning, simulation, and analytics), as well as for supporting transaction processing where demand is random with large, unpredictable spikes in incoming traffic. And, they are especially good for companies where the data is expanding rapidly, such as:
- Medical device monitoring
- Real-time financial analytics
- Online banking and credit card sales
- E-commerce sites and online auctions
- Real-time market data on new products or offers
- Machine learning for billing and subscriber applications
- Geographic information system (GIS) processing
- Streaming sensor data (IoT)
- Network and grid management
- Advertising results (A/B testing for online ads)
- Interactive gaming
- And more …
Benefits of in-memory
Fast reading and writing of data is the primary characteristic of an in-memory database, and it translates directly into faster processing and improved response in business applications. Application developers have also been quick to realize that this speed makes it possible to redesign other tools and programs so they deliver more value. When a system is architected and built from the ground up on an in-memory database, numerous improvements become possible in the design of its internal data models and processes.
Data model: A number of different database structures have been developed for legacy technologies to optimize data access for different tasks:
- Data stored in rows (traditional schema)
- Column-oriented architecture, which provides high-volume, fast access response for a limited subset of data
- Special databases for unstructured data
- Other structures that speed up access in limited use cases or accommodate special requirements
A modern in-memory database, like SAP HANA, allows all types of data to be stored in a single system, including structured transactions and unstructured data such as voice, video, free-form documents, and emails – all with the same fast access capability.
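The row-versus-column distinction above can be made concrete with a small sketch. This is an illustration of the two layouts in plain Python, not a depiction of how SAP HANA or any specific product stores data internally; the table, field names, and values are invented for the example.

```python
# The same small table in two layouts.

# Row-oriented: each record is stored as one unit.
rows = [
    {"id": 1, "product": "A", "qty": 10, "price": 2.5},
    {"id": 2, "product": "B", "qty": 4,  "price": 7.0},
    {"id": 3, "product": "A", "qty": 6,  "price": 2.5},
]

# Column-oriented: one contiguous sequence per attribute.
columns = {key: [row[key] for row in rows] for key in rows[0]}

# The row layout suits transactional access: fetch one complete record.
record = rows[1]                    # every field of order 2

# The column layout suits analytics: scan only the attribute you need.
total_qty = sum(columns["qty"])     # touches just the qty column
```

In a real column store the per-attribute sequences are also heavily compressed, which is part of why analytical scans over them are fast.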
Faster processing: In-memory databases are faster than legacy databases because they require fewer CPU instructions to retrieve data. Developers can exploit this benefit by adding more functionality without the accompanying drag on system response. Parallel processing, in which multiple subsets of the data (columns) are processed simultaneously, adds even more speed and capacity.
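The parallelism idea can be sketched simply: because each column can be aggregated independently of the others, the work divides naturally across workers. This is a toy illustration using Python's standard thread pool, under the assumption of independent per-column aggregates; real database engines schedule this work far more carefully.

```python
# Each column is aggregated independently, so the aggregations
# can run in parallel across a pool of workers.
from concurrent.futures import ThreadPoolExecutor

columns = {
    "qty":   [10, 4, 6, 8],
    "price": [2.5, 7.0, 2.5, 1.0],
}

def aggregate(item):
    name, values = item
    return name, sum(values)   # one column, one independent task

with ThreadPoolExecutor() as pool:
    totals = dict(pool.map(aggregate, columns.items()))
```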
Combined tools: Traditional systems store transaction data in a legacy database that is accessed by online transactional processing (OLTP). Then, to get a view for analytics, the data is often moved to a separate database (data warehouse) where online analytical processing (OLAP) tools can be used to analyze large data sets (or Big Data). Modern, in-memory databases can support both OLAP and OLTP, eliminating the need for redundant storage and the delays between data transfers, which in turn eliminates any concerns about completeness or timeliness of the warehouse data.
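A minimal sketch of the combined approach, with invented data: a single in-memory table serves both a transactional write (OLTP) and an analytical aggregate (OLAP), so the analytics sees new transactions immediately, with no copy to a warehouse in between.

```python
# One live, in-memory table used for both workloads.
sales = [
    {"region": "EU", "amount": 100.0},
    {"region": "US", "amount": 250.0},
]

# OLTP: record a new sale directly in the live table.
sales.append({"region": "EU", "amount": 75.0})

# OLAP: aggregate over the same live data -- the new sale is
# included immediately, with no ETL transfer or warehouse lag.
by_region = {}
for sale in sales:
    by_region[sale["region"]] = by_region.get(sale["region"], 0.0) + sale["amount"]
```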
Smaller digital footprint: Traditional databases store a large amount of redundant data. For example, the system creates a copy of each row that is updated, and it adds tables of pre-combined data sets that increase space and maintenance requirements. In addition to the OLAP/OLTP redundancy avoided as described above, column-oriented databases record only the changes as they are applied, rather than copying entire rows.
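One common way column stores record only the changes is a small write-optimized "delta" structure alongside the compressed main store; reads merge the two. The sketch below is a hedged illustration of that general pattern, not SAP HANA's actual storage format.

```python
# Main store: read-optimized (and, in real systems, compressed).
main_store = {"qty": [10, 4, 6]}
# Delta store: a small buffer holding only recent changes.
delta_store = {"qty": []}

def insert(value):
    """A write lands in the delta -- no row is copied."""
    delta_store["qty"].append(value)

def scan():
    """A read merges the main store with the pending delta."""
    return main_store["qty"] + delta_store["qty"]

insert(8)
# The main store is untouched; only the single new value was written.
```

Periodically the delta is merged into the main store, keeping the overall footprint small.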
Immediate insight: A modern, in-memory database provides embedded analytics to deliver business insight for real-time alerts and operational reporting on live transactional data.
How does a modern, in-memory database work?
It would be inefficient and unnecessary to hold all of a company’s data in memory; some information is held in memory (called hot storage) while the rest is stored on disk (cold storage). The hot and cold designations derive from information-handling paradigms developed by the cloud computing industry.
Hot data is deemed mission-critical and is accessed frequently, so it is held in memory for fast retrieval and modification.
Data that is more static – in other words, data that is requested infrequently and not normally required for active use – can be stored in a less expensive (and virtually unlimited) way on disk drives or solid-state drives (SSDs). Cold-storage data does not benefit from the fast access of an in-memory database, but it is still readily available when needed for less time-critical applications. Cold storage is best for historical data, closed activities, old projects, and the like.
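The hot/cold split can be sketched as a simple tiering policy. Everything here is an assumption for illustration – the 30-day threshold, the record names, and the use of plain dictionaries standing in for memory and disk – since real systems use configurable, more sophisticated aging rules.

```python
import time

hot = {}    # in-memory tier: frequently accessed, fast retrieval
cold = {}   # stands in for disk or SSD: infrequently accessed data

# Assumed policy for this sketch: data untouched for more than
# 30 days is considered cold.
COLD_AFTER_SECONDS = 30 * 86400

def put(key, value, last_access):
    tier = hot if time.time() - last_access < COLD_AFTER_SECONDS else cold
    tier[key] = value

now = time.time()
put("open_order_42", {"status": "active"}, last_access=now)            # hot
put("project_2015",  {"status": "closed"}, last_access=now - 365*86400) # cold
```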
In planning the migration to an in-memory database, the implementation team decides how to sort existing data into cold storage for past requirements and hot storage for ongoing activities. Archiving criteria for keeping the active systems and data in top condition must also be determined.
In-memory database systems are designed with “persistence” – logging all transactions and changes – to provide standard data backup and system restore. Persistence allows modern systems to run at full speed while preserving data in the event of a power failure.
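The persistence idea can be sketched as a minimal append-only change log: every change is written to the log before it is applied in memory, so the in-memory state can be rebuilt by replaying the log after a crash. This is a simplified illustration; real systems combine such logs with periodic savepoints and write to durable storage, not an in-memory buffer as used here for the example.

```python
import json, io

log = io.StringIO()   # stands in for an append-only log file on disk

def apply_change(state, change, log):
    log.write(json.dumps(change) + "\n")    # 1. persist the change first
    state[change["key"]] = change["value"]  # 2. then update memory

def recover(log):
    """Rebuild the in-memory state by replaying the log from the start."""
    state = {}
    log.seek(0)
    for line in log:
        change = json.loads(line)
        state[change["key"]] = change["value"]
    return state

state = {}
apply_change(state, {"key": "balance", "value": 100}, log)
apply_change(state, {"key": "balance", "value": 80}, log)
# After a simulated restart, replaying the log reproduces the state.
```

Writing the log entry before the in-memory update is what guarantees that a change acknowledged to the user survives a power failure.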
The time to move to in-memory data is now
A modern in-memory database is an important foundational building block for digital transformation. Why? Because a digital enterprise cannot use yesterday’s data to make today’s decisions. Now that in-memory pricing is lower and memory capacity is steadily expanding, an in-memory database is a good choice for enterprises that need real-time insight to thrive in today’s economy.