Confident Memory Management on Embedded Devices

Embedded software applications face many challenges that are not present on desktop computers. A device with a dedicated function is not generally regarded as a computer, even when software makes up a significant part of it. Users will put up with occasional slowdowns and crashes on a desktop computer, but devices are held to a higher standard, especially when they are part of a mission-critical system. Memory allocation is an important factor in providing the necessary performance and reliability on an embedded device.

Storing, organizing, and sharing data makes up a large part of the memory requirements for an application. A device application can use an embedded database library to manage memory more effectively, by both imposing bounds on memory usage and analyzing worst-case behavior in a consistent way. The database library can handle all the details of reading, writing, indexing, and locking data within a predictable footprint, so that the application's own memory requirements are greatly reduced.

Design Considerations for Predictable Memory Usage

Reliable embedded devices depend on predictable behavior. For memory allocation, this requires knowing how much memory an application will need in the worst case, and then finding ways to reduce that amount. To do this, an application needs to follow a good memory allocation strategy, measure memory usage under a variety of representative configurations, and analyze the results.

Total memory usage includes not only the memory requested by the application, but also the overhead of the dynamic memory allocator itself. Some allocators are more susceptible to fragmentation than others, so it is important to know what kind of allocator the application is using.

Two-Phase Memory Allocation

Two-phase memory allocation is a useful strategy to avoid memory fragmentation. Large and long-term objects are allocated first so that they are guaranteed a place in memory. Small and short-lived objects are allocated in the second phase because they are less likely to fail even if memory is fragmented. In this way, there is little risk that an allocation will fail merely because no contiguous region of memory is large enough.
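
For illustration, the following C sketch separates the two phases. The names and sizes are hypothetical, not taken from any particular application:

    #include <stdlib.h>

    #define CACHE_SIZE   (256 * 1024)   /* large, long-term buffer */
    #define TABLE_SLOTS  1024

    static void  *cache;
    static void **table;

    /* Phase one: allocate large, long-term objects at startup,
       before the heap has a chance to fragment. */
    int init_long_term(void)
    {
        cache = malloc(CACHE_SIZE);
        table = malloc(TABLE_SLOTS * sizeof *table);
        return (cache != NULL && table != NULL) ? 0 : -1;
    }

    /* Phase two: small, short-lived objects are requested during
       normal operation; small requests are likely to succeed even
       when free memory is fragmented. */
    void *alloc_short_term(size_t n)
    {
        return malloc(n);
    }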

Applications should also avoid recursion, which is difficult to analyze for both performance and memory requirements. Limited recursion may be acceptable, however, if it is sufficiently bounded.
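
As a rough illustration, the sketch below bounds recursion with an explicit depth limit, so that worst-case stack usage can be estimated as the depth bound multiplied by the frame size. The tree traversal itself is hypothetical:

    #define MAX_DEPTH 16

    struct node { struct node *left, *right; };

    /* Depth-limited recursion: stack usage is bounded by
       MAX_DEPTH frames. In a real system, exceeding the bound
       would be reported as an error rather than ignored. */
    int count_nodes(const struct node *n, int depth)
    {
        if (n == NULL || depth >= MAX_DEPTH)
            return 0;  /* refuse to descend past the bound */
        return 1 + count_nodes(n->left,  depth + 1)
                 + count_nodes(n->right, depth + 1);
    }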

This strategy should be applied throughout the application, both in the application code itself and in any libraries that allocate memory. An application that follows this strategy will have a consistent memory profile under any memory allocator.

Collecting Statistics

When measuring memory allocation behavior, the most important statistics to collect are the largest amount of memory allocated at any one time and the size of the single largest allocation. Overhead from the memory allocator should be included in these statistics, if available. Other statistics may also be valuable for certain memory allocators.
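
One common way to gather these statistics is to wrap the allocator. The following C sketch is illustrative only; it counts requested bytes and would need to be extended to include the allocator's own overhead, as noted above:

    #include <stdlib.h>

    static size_t current_usage;   /* bytes currently allocated   */
    static size_t peak_usage;      /* largest total at any moment */
    static size_t largest_alloc;   /* largest single request      */

    /* A small size header is prepended so that tracked_free()
       can subtract correctly. This simplified header scheme does
       not preserve maximum alignment on every platform. */
    void *tracked_malloc(size_t n)
    {
        size_t *p = malloc(sizeof(size_t) + n);
        if (p == NULL)
            return NULL;
        *p = n;
        current_usage += n;
        if (current_usage > peak_usage)
            peak_usage = current_usage;
        if (n > largest_alloc)
            largest_alloc = n;
        return p + 1;
    }

    void tracked_free(void *ptr)
    {
        if (ptr != NULL) {
            size_t *p = (size_t *)ptr - 1;
            current_usage -= *p;
            free(p);
        }
    }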

The amount of memory used by an application usually depends on how it is configured and how it is used. Statistics should be collected for several different configurations that represent all of the extreme memory use cases. The application should also be divided into discrete operations that can be tested individually, so that results can be calculated without simulating all possible combinations.

Analysis

When an application runs out of memory, it is very difficult to recover, because the failure can occur at any point in the program. It is therefore necessary to calculate the total amount of memory required to run the application without any allocation ever failing.

The first phase of allocations should always be the first operation performed by the application. Because this allocation is static, it can be subtracted from the peak allocation of every other operation and counted only once.

Provided operations run sequentially, one at a time, the peak memory consumption is the largest peak of any individual operation. If operations can overlap, the maximum memory consumption is the sum of the peaks of all operations that could run concurrently.

Combined with the memory allocation statistics collected earlier, this makes it possible to calculate the maximum memory bound for an application.
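
For example, a calculation along these lines (the function and parameter names are illustrative) combines the static first phase with the per-operation peaks:

    #include <stddef.h>

    /* Worst-case bound from per-operation statistics. The peak
       values are assumed to have the static first-phase
       allocation already subtracted, so it is added back once. */
    size_t total_bound(size_t static_phase,
                       const size_t *op_peaks, size_t n_ops,
                       int sequential)
    {
        size_t bound = 0;
        for (size_t i = 0; i < n_ops; i++) {
            if (sequential) {
                if (op_peaks[i] > bound)
                    bound = op_peaks[i];   /* max of peaks */
            } else {
                bound += op_peaks[i];      /* sum of concurrent peaks */
            }
        }
        return static_phase + bound;
    }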

Selecting a Dynamic Memory Allocator

Most operating systems use a general-purpose memory allocator. All applications share a large pool of memory, though memory protection is usually employed to partition memory into pages. Fragmentation behavior varies from one operating system to the next, and worst-case analysis on one platform may not be applicable on another.

A custom memory allocator can be used to bound certain allocations in a predictable way. For example, a class of allocators known as "buddy allocators" guarantees certain worst-case behavior based only on the peak allocation and the greatest individual allocation size. Such an allocator is best applied to each application separately, so that each can be analyzed independently. In this way, each application's total memory requirements can be satisfied when the application is loaded.
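
A binary buddy allocator, for instance, rounds each request up to the next power of two, so a single request consumes at most twice its size. A small illustrative helper makes this rounding explicit for use in worst-case estimates:

    #include <stddef.h>

    /* Block size a binary buddy allocator would use for a
       request of n bytes: the next power of two >= n. */
    size_t buddy_block_size(size_t n)
    {
        size_t block = 1;
        while (block < n)
            block <<= 1;
        return block;
    }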

Memory Allocation with an Embedded Database

Memory allocation behavior can have a significant impact on the performance and reliability of an embedded device. Extreme measures, such as allocating all memory statically at compile time, are highly restrictive and unnecessary if developers are willing to apply some analysis. An embedded database that provides robust memory management features, like ITTIA DB SQL, can help with this analysis.

Embedded developers should be careful when using any library that performs significant memory allocations, as analysis can be very difficult when variations in behavior are unpredictable. ITTIA DB SQL is carefully designed with predictable, documented memory allocation behavior, so that developers can be assured that statistics will be consistent each time an operation is run.