By Joannes Vermorel, March 2020

ERP (Enterprise Resource Planning) refers to a class of enterprise software that supports the routine operations of a company and tracks its resources, most notably cash, raw materials, work in progress, end products, client orders, purchase orders, and payroll. ERPs typically include many modules intended for core business functions, e.g. accounting, procurement, distribution, finance, sales, … and offer tight integration across those modules to facilitate the flow of (transactional) information between functions. ERPs are built on top of relational databases, and typically feature a very extensive design: a large number of entities (e.g. products, clients, invoices, etc.), numerous screens and a high degree of configurability.
Origin and motivation
During the 70’s, it gradually became obvious that electronic records presented multiple advantages over the traditional paper trail. Electronic records were becoming cheaper, faster and more reliable than paper records. Two inventions that predate ERPs were key to empowering electronic records: barcode readers (1950’s) and relational databases (1970’s). Barcode readers offered a superior way to manage flows of goods, increasing productivity while reducing clerical errors. Yet, while barcode readers had dramatically improved data acquisition in many situations, storing, organizing and processing electronic records remained somewhat of an open problem for two decades. Relational databases were the software industry’s response to this problem in the late 70’s, and four decades later, relational databases remain the dominant practice for business data management.
However, bare relational database systems, as typically sold in the early 80’s, proved very expensive to implement, as every company was re-inventing how to represent everything in its database: products, clients, invoices, payments, etc. Thus, during the 80’s, a series of software companies emerged, selling “pre-configured” relational systems. These products would later be collectively referred to as ERPs, a term coined in the 90’s. Unfortunately, the acronym ERP is a misnomer; it should have been Enterprise Resource Management rather than Planning. Indeed, planning is - at best - a secondary concern for ERPs. As detailed below, predictive analytics are essentially at odds with the core design of ERPs.
Historically, ERPs gained traction because they streamlined series of operations that previously required extensive clerical efforts. For example, issuing a purchase order to a supplier required filling a form with the supplier’s name and address. The order lines had to include only valid product references. Then, upon reception of the goods, the received quantities had to match those found in the original purchase order, and once the delivery was deemed conforming, a payment order had to be generated against a template with the proper amount, a date matching the supplier’s payment terms, and the supplier’s correct bank account number. All of this information can be found in the ERP, and coherence checks can easily be automated.
The ERP market experienced rapid growth in the late 90’s, primarily fueled by the steady progress in computing hardware (processors, memory, storage), which had become accessible to companies of all sizes.
In the 90’s, ERPs became the software core of most large companies whose business primarily revolved around the flow of goods. In contrast, companies dominantly geared toward services usually adopted a CRM (Customer Relationship Management) software as their core. ERPs and CRMs share many attributes, including their design on top of relational databases. ERPs typically adopt a transaction-centric perspective on most of their features, while CRMs adopt a client-centric perspective.
The design of ERPs
ERPs are diverse, as the market includes many vendors that have each shipped numerous versions of their ERP products over multiple decades; and yet, at the core, most implementations still follow highly similar design lines. ERPs emerged as “pre-configured” software layers implemented on top of relational databases. Thus, the ERP design process involves cataloging:
- entities, that is, all the concepts or objects that the ERP needs to represent, such as products, payments, stocks, suppliers… Each entity may involve one or several tables in the underlying relational database. Most ERPs define hundreds of entities, and up to thousands for the most complex ERPs.
- user interfaces which let end-users view and edit the entities' data. These interfaces are dominated by the CRUD design (Create / Read / Update / Delete), which represents the most basic operations that end-users expect when dealing with entities.
- business logic, which provides automated behaviors for the many clerical tasks that can be automated based on well-defined rules, such as converting client orders into client invoices (a minimal sketch of such a rule follows this list).
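To make this concrete, here is a minimal sketch - with hypothetical entities and field names, not taken from any actual ERP - of such a rule-based behavior: turning a shipped client order into a client invoice according to the client’s payment terms.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import List

@dataclass
class OrderLine:
    sku: str
    qty: int
    unit_price: float

@dataclass
class Invoice:
    client_id: str
    due_date: date
    amount: float

def invoice_from_order(client_id: str, lines: List[OrderLine],
                       payment_terms_days: int = 30) -> Invoice:
    # Well-defined rule: the invoice mirrors the shipped order lines,
    # and the due date follows the client's payment terms.
    amount = sum(line.qty * line.unit_price for line in lines)
    due = date.today() + timedelta(days=payment_terms_days)
    return Invoice(client_id=client_id, due_date=due, amount=amount)

invoice = invoice_from_order("ACME", [OrderLine("SKU-001", 2, 49.0)])
assert invoice.amount == 98.0
```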
As businesses are incredibly diverse and complex, there is a never-ending stream of needs for newer or refined entities, user interfaces and business logic to cover evolving business practices. This represents a massive ongoing effort from ERP vendors. The challenge is then compounded by all the ambiguities that arise when attempting to serve very diverse verticals: the same term, like “payment”, reflects very different realities and processes from one vertical to another. In software, complexity tends to have nonlinear costs, i.e. supporting 2x more features in a product costs a lot more than twice the original cost.
As a result, ERP vendors have adopted a series of strategies to make their software investments more competitive.
Technology
In order to cope with this complexity, the first strategy leveraged by ERP vendors consists of developing technologies with the explicit intent of lowering the costs associated with that complexity.
In particular, many ERP vendors engineered DSLs (Domain-Specific Languages) intended to supplement the query language - typically a flavor of SQL nowadays - of the underlying relational database. Through a well-engineered DSL, extending the ERP (i.e. newer entities, interfaces or logic) to cover the specificities of a given company can be done with fewer resources than undertaking the same effort with a general purpose programming language.
Since the 2000’s, open source ERP vendors - who emerged leveraging the open source relational databases that had become available in the late 90’s - have usually adopted an alternative plug-in strategy (instead of DSLs), where the ERP is designed to be extended through the introduction of custom code, tailored for each client company, written in the same general purpose programming language as the ERP itself. This strategy is lighter to execute for the ERP vendor (no DSL to be engineered), and aligned with the ERP’s open source nature.
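As an illustration, here is a minimal sketch of the plug-in approach in Python. The hook names and the registry are hypothetical - real open source ERPs expose far richer extension mechanisms - but the gist is the same: client-specific code, written in the ERP’s own language, attaches itself to events raised by the ERP core.

```python
from typing import Callable, Dict, List

# Hypothetical hook registry; actual ERPs use more elaborate schemes.
_hooks: Dict[str, List[Callable]] = {}

def register_hook(event: str, callback: Callable) -> None:
    """Attach client-specific logic to an ERP event."""
    _hooks.setdefault(event, []).append(callback)

def fire_hooks(event: str, payload: dict) -> None:
    """Called by the ERP core whenever a business event occurs."""
    for callback in _hooks.get(event, []):
        callback(payload)

# Client-specific extension: flag large purchase orders for approval.
def require_manager_approval(order: dict) -> None:
    if order["total"] > 10_000:
        order["status"] = "pending_approval"

register_hook("purchase_order_created", require_manager_approval)

# The ERP core would invoke the hooks like this:
order = {"id": 42, "total": 25_000, "status": "created"}
fire_hooks("purchase_order_created", order)
assert order["status"] == "pending_approval"
```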
Offering
As the cost of implementing and supporting features grows with the overall number of features, most ERP vendors adopt a module-based pricing strategy: features are bundled into “modules”, each focused on a distinct functional area within the company: inventory management, finance, payroll, etc. The ERP is sold on a per-module basis, letting client companies cherry-pick their most pressing modules and delay the others for later investments.
The module-based pricing strategy also gives the ERP vendor a natural upsell path, where the existing customer base becomes the prime target for future sales. Business areas not covered by the ERP’s original set of modules get new dedicated modules that can, in turn, be sold to client companies.
This module-based pricing strategy is an effective mechanism to cope with complexity because it gives direct feedback to the ERP vendor about the functional areas where the willingness to pay is the greatest. As such, it helps the vendor to correctly prioritize its software engineering efforts.
Ecosystem
Each extra feature added to an ERP tends to yield diminishing returns for an ERP vendor that has correctly prioritized its software engineering efforts (1). Indeed, if a feature hadn’t been added before, it’s probably because it wasn’t concerning enough companies in the first place. Then, organizations themselves - ERP vendors included - tend to be subject to diseconomies of scale as well: adding extra software engineers to a software product is notorious for not yielding linear gains in the throughput of improvements brought to the product.
Thus, most ERP vendors adopt an ecosystem strategy to delegate those last-mile development efforts - needed to get the ERP operational for a given company - to third-party companies, typically known as integrators. Those integrators charge client companies for the implementation and roll-out of all the capabilities that aren’t offered by the “raw” ERP. Historically, up to the 2000’s, when companies were adopting ERPs for the first time, the work of integrators was usually centered around the introduction of extra capabilities for the ERP. Nowadays, however, most ERP projects are upgrades, as a legacy ERP is already in place. Thus, the primary added value of the integrator is to ensure a smooth transition from the old ERP to the new one. In practice, this work entails migrating data and processes between systems.
Unlike ERP vendors, whose business strategy is primarily geared toward the ERP itself, treated as an IP (intellectual property) asset, integrators usually adopt some flavor of man-day charges: they bill their work based on the number of days worked, and the IP of the work delivered usually falls, by contract, to the client company itself.
Developing a diverse ecosystem of integrators, across both geographies and verticals, is an effective way to mitigate the inherent complexity associated with ERP development. Nearly all large ERP vendors have developed sizable networks of integrators.
The limits of ERPs
ERPs inherit most of the limitations of their underlying relational database systems (2). Further limitations arise from the complexity-mitigation strategies described in the previous section. These limitations are particularly interesting because they are design limitations, and thus unlikely to be addressed by newer ERP versions; or rather, solving them would likely involve denaturing ERPs to the point where it would not make much sense to still refer to those software products as ERPs.
Adverse to coarse reads or writes
The relational databases used by ERPs deliver the ACID (Atomicity, Consistency, Isolation, Durability) properties with a design extensively geared toward a workload dominated by small read and write operations, to be performed with low latencies - typically a fraction of a second - with reads and writes roughly balanced in volume. A detailed analysis of relational databases goes beyond the scope of the present article, but this observation concerning the intended workload explains, on its own, many poorly understood limitations of ERPs.
Due to their design based on relational databases, ERPs are largely unsuited for analytics, statistics and reporting whenever the amount of data is nontrivial. Accessing a nontrivial amount of data in any ERP - while business operations are ongoing - is always a problem: when the system gets starved by too many data reads or writes, it slows down. In practice, this means that mundane operations, such as processing barcodes, slow down as well; in the worst case, the whole company grinds to a halt. Thus, every operation in the ERP, read or write, needs to remain small, ideally “trivial”. The amount of data that can be considered nontrivial has dramatically increased over the last four decades, along with better, cheaper computing hardware. However, companies took advantage of this progress to vastly expand the amount of data poured into their ERPs as well. As a result, present-day ERP systems are typically not noticeably faster than they were two decades ago.
For example, ERPs are well suited to display the stock level of a SKU, or to update its value when a unit is picked or loaded, but they are typically not suited to display the consolidated daily timeline of the stock variations of a SKU over the last three years. The entire Business Intelligence (BI) segment of software products emerged in the 90’s as the industry-wide response to the analytical limitations of ERPs (and of CRMs alike). Unlike ERPs, BI tools are engineered around in-memory hypercubes, usually referred to as OLAP (Online Analytical Processing). By giving up the ACID properties, OLAP systems become dramatically more efficient, hardware-wise, than relational databases at delivering aggregate statistics, such as sales per day, per store, per category, etc.
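The contrast between the two workloads can be illustrated with a short sketch (plain Python, hypothetical field names): the transactional pattern touches a single record, while the analytical pattern must scan the entire history to build its aggregates - precisely the kind of coarse read that starves a live relational database.

```python
from collections import defaultdict
from datetime import date

# Transactional workload (ERP-friendly): a tiny, low-latency write
# touching a single record.
stock_levels = {"SKU-001": 42}
stock_levels["SKU-001"] -= 1  # one unit picked

# Analytical workload (OLAP-friendly): scan years of history to build
# an aggregate, e.g. units sold per (day, store).
sales_history = [
    {"day": date(2019, 3, 1), "store": "Paris", "sku": "SKU-001", "qty": 3},
    {"day": date(2019, 3, 1), "store": "Lyon", "sku": "SKU-001", "qty": 1},
    # ... millions of lines in a real system
]

cube = defaultdict(int)
for line in sales_history:
    cube[(line["day"], line["store"])] += line["qty"]
```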
It is intriguing to note that most ERP vendors did not appear to be fully aware of this design limitation in their own products, even those that emerged after the 90’s, when the BI segment was living proof of this state of affairs. For example, most ERP vendors ventured into demand forecasting features which are, by design, completely at odds with their underlying relational database - even more so than plain reporting features. Forecasting requires access to the entire history of the company’s transactional data: sales, returns, stock levels, discounts, etc. While the calculation is typically intended to be batched during the night - hence mitigating the resource starvation issue mentioned above - relational databases entail vast accidental overheads when attempting to carry out this type of calculation. As a result, nowadays, most ERPs feature “legacy” forecasting capabilities (3) that fell into disuse years, and sometimes decades, ago.
Adverse to new or distinctive paradigms
The entity-cataloging strategy used by ERP vendors does not scale linearly in terms of complexity management. Better tools, as discussed above, only bring linear relief - with respect to the number of entities - while complexity costs grow superlinearly. As a result, new or distinctive paradigms usually prove expensive, in both costs and delays, to integrate, frequently to the point where skipping the ERP altogether is the better alternative. These situations are diverse; a few are listed below, but they all boil down to paradigms that are difficult to integrate because they came late in the ERP development process (4).
Achieving zero operational downtime is difficult because, as pointed out above, any large read or write puts the ERP at risk of a system slowdown. Thus, all such operations are typically performed as nightly batches, when few or no operations are in progress. Yet, this design is at odds with ecommerce requirements, which emerged relatively late in the history of ERPs. As a result, most ERP vendors kept their ecommerce module largely separate, frequently leveraging a separate database system and breaking the implicit design rule that all the ERP modules live in the same database(s). Consequently, ERP-backed ecommerce modules tend to be as difficult and expensive to roll out as third-party apps.
Dealing with remanufacturing verticals is difficult: goods follow complex cyclical flows (e.g. repairs), while mainstream ERPs are heavily geared toward forward flows - from producers to consumers - as commonly found in FMCG and distribution. While most of the relevant entities for remanufacturing purposes (e.g. products, stocks, orders) already exist in ERPs, their designs are typically completely at odds with the requirements of remanufacturing. In those verticals, it is frequently easier to re-implement those entities entirely rather than to recycle those of a mainstream ERP.
Delivering guaranteed low latencies, say below human perception (i.e. below 50ms), is difficult because neither the ERP nor its relational database was designed with such requirements in mind. Yet, piloting highly automated systems, like robotic warehouses, requires these sorts of latencies in order to avoid the software-driven orchestration becoming the system’s bottleneck. Thus, in practice, most areas associated with “real-time” (4) processing are handled by dedicated systems outside the ERP.
Interestingly, there are specialized ERPs - though they don’t always refer to themselves as ERPs - that took alternative development paths precisely in order to cope with those situations. These ERPs typically target niche verticals that are not properly served by mainstream ERPs. However, these alternative development paths are typically at odds with mainstream requirements.
Risks in ERP transitions
While it may seem like a paradox, it is typically vastly more difficult to upgrade an ERP than to deploy it for the first time. Upgrades are notorious for being multi-year processes. Even mere version upgrades of the ERP - keeping the same vendor - usually prove to be difficult, many-month undertakings. Anecdotally, this difficulty regularly makes the news as large companies publish press releases indicating that their ERP upgrade efforts have run over budget by hundreds of millions of dollars or euros. In such situations, the ERP vendor, the integrator and/or the company itself gets blamed. Yet, most fail to acknowledge that such problems are intrinsic to ERPs themselves, as designed through the market forces listed above.
Complexity costs do not scale linearly but superlinearly. When a company adopts an ERP for the first time, it has the luxury of gradually adopting each module, roughly one module at a time. Thus, the number of entities / interfaces / logics involved with each iteration is relatively small, and bespoke extensions can be gradually delivered to make the ERP suited to the needs of the company.
However, when the company has to transition to another ERP, it faces the problem of transitioning all the modules at the same time. Indeed, ERPs are, by design and by economic forces (5), software monoliths built on top of a small set of databases. Thus, when the new ERP has to be deployed, the incremental adoption path used for the original ERP cannot be replicated. The company therefore has to transition in an all-or-nothing manner. As a result, upfront implementation costs tend to be very steep, while the post-deployment period tends to be a half-broken situation that takes up to several years to fix.
Technically, these upgrades are always difficult to implement because importing the entities / interfaces / logics between the old and the new system is not a one-to-one affair. The semantics of the entities evolve over time. For example, the ERP vendor might have started with a table named “Orders” intended for client orders. Down the road, however, the vendor had to address a newer requirement, not originally planned: managing client returns. The next version of the ERP recycles the “Orders” table for client returns, but those lines now have negative order quantities. This seemingly innocuous change ends up vastly complicating the migration from the old ERP version to the new one.
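A minimal sketch of what undoing such a convention looks like during a migration (hypothetical schema and field names): every implicit convention of this kind must first be rediscovered, then reversed, one at a time.

```python
# In the old system, the "Orders" table mixes client orders and
# client returns, distinguished only by the sign of the quantity.
old_orders = [
    {"order_id": 1, "sku": "SKU-001", "qty": 5},   # genuine order
    {"order_id": 2, "sku": "SKU-001", "qty": -2},  # actually a return
]

new_orders, new_returns = [], []
for line in old_orders:
    if line["qty"] >= 0:
        new_orders.append(line)
    else:
        # The new system models returns as a distinct entity with
        # positive quantities, so the sign convention must be undone.
        new_returns.append({**line, "qty": -line["qty"]})
```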
However, upgrading toward a new ERP, or toward a later version of the same ERP, isn’t the only option for companies. Multiple alternatives are actually available to de-risk the transition. Naturally, the whole ecosystem of ERP vendors and integrators has a vested financial interest in promoting the opposite view, as a matter of survival for their economic model.
Beyond the monolith
ERPs emerged as software monoliths, that is, software products whose inner components are all tightly coupled - a necessity in order to ensure a plug-and-play roll-out of all the modules. Furthermore, until the 2010’s, building distributed systems - i.e. software that operates over a fleet of machines rather than over a few central machines - was significantly more difficult and costly (6). Thus, to a large extent, ERP vendors (most of them being decades old) had no alternative but to build their products as monoliths.
Nevertheless, as cloud computing became mainstream, tools and libraries - frequently open source - became more popular and more accessible. As a result, the costs and overheads associated with the design of distributed applications have been steadily decreasing over the past decade. The software industry has started to extensively revisit how the added value historically delivered only by ERPs (or ERP-like software) can be delivered.
The modern approach to enterprise software consists of breaking the monolith into a series of smaller apps that, ideally, do one thing and do it well (7). Gluing the apps together is done through APIs (Application Programming Interfaces), which are precisely intended to facilitate bespoke integrations into diverse applicative landscapes. Most APIs are designed to leverage popular open source API tooling that significantly lowers the development costs associated with those bespoke integrations.
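For illustration, here is a minimal sketch of such API glue in Python; the two apps, their URLs and their routes are entirely hypothetical, but the pattern - read from one app’s API, transform, write to another’s - is representative of these bespoke integrations.

```python
import requests

# Hypothetical endpoints of two specialized apps.
INVENTORY_API = "https://inventory.example.com/api/v1"
PRICING_API = "https://pricing.example.com/api/v1"

# Read stock levels from the inventory app ...
resp = requests.get(f"{INVENTORY_API}/stock-levels", timeout=10)
resp.raise_for_status()

# ... and push them to the pricing app, which discounts overstocks.
for item in resp.json():
    requests.post(
        f"{PRICING_API}/stock-signals",
        json={"sku": item["sku"], "on_hand": item["on_hand"]},
        timeout=10,
    ).raise_for_status()
```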
Those apps sometimes have steeper upfront integration costs than built-in ERP modules. However, they present sizable long-term benefits, as they vastly de-risk further evolutions of the applicative landscape. The company gains the option of upgrading or replacing one app at a time, which proves not only much easier to execute, but also cheaper and less risky. Finally, SaaS (Software as a Service) apps typically opt for continuous delivery of small incremental changes. While this pattern generates an ongoing - but limited - workload for IT teams, the SaaS change process is more organic, cheaper and less risky than yearly or biennial upgrades.
Beyond relational databases
Relational databases have been the de facto backbone of ERPs since the 80’s. However, since the 2010’s, compelling alternatives have emerged. The most notable one is probably Event Sourcing (ES) coupled with Command Query Responsibility Segregation (CQRS). This approach offers superior scalability and latency while also allowing more expressive designs, i.e. capable of more narrowly capturing business intents in various situations.
The gist of event sourcing consists of treating every change in the system as an immutable event. The immutability angle is inspired by accounting practices: when a line proves incorrect in a ledger, the accountant does not erase the line to fix the problem; instead, a corrective line is added to the ledger. Keeping the entire history of events - assuming data storage is cheap enough to make this strategy viable - yields numerous benefits: better traceability, auditability, security... Furthermore, immutability makes event-driven systems easier to scale: data can be extensively copied and replicated without having to deal with changes to the data.
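A minimal sketch of the idea (hypothetical event names): the log is append-only, mistakes are fixed by appending corrective events, and the current state is recovered by folding over the history.

```python
event_log = [
    {"type": "stock_received", "sku": "SKU-001", "qty": 10},
    {"type": "stock_picked",   "sku": "SKU-001", "qty": 3},
    # A clerk mistyped the picked quantity; rather than editing the
    # event above, a corrective event is appended:
    {"type": "pick_corrected", "sku": "SKU-001", "qty": -1},
]

def current_stock(log, sku):
    """Fold the immutable history into the present stock level."""
    level = 0
    for event in log:
        if event["sku"] != sku:
            continue
        if event["type"] == "stock_received":
            level += event["qty"]
        elif event["type"] in ("stock_picked", "pick_corrected"):
            level -= event["qty"]
    return level

assert current_stock(event_log, "SKU-001") == 8
```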
The CQRS design acknowledges that, in most systems, the respective volumes of reads and writes are highly imbalanced. In many situations, the volume of (data) reads exceeds the volume of writes by several orders of magnitude. However, relational databases are geared toward (somewhat) symmetrical volumes of reads and writes. CQRS explicitly segregates the responsibilities for reads and writes, typically into two distinct subsystems. This design yields further benefits, mostly in terms of latency, scalability and hardware efficiency.
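Continuing the sketch above, a minimal illustration of CQRS (again with hypothetical names): commands go through the write side, which appends events; queries never touch the write path and are served from a precomputed projection that can be replicated and scaled independently.

```python
from collections import defaultdict

event_log = []                   # write side: append-only event store
stock_by_sku = defaultdict(int)  # read side: precomputed projection

def handle_command(command: dict) -> None:
    """Write side: validate the command, then append an event."""
    if command["type"] == "receive_stock":
        event = {"type": "stock_received",
                 "sku": command["sku"], "qty": command["qty"]}
        event_log.append(event)
        apply_event(event)

def apply_event(event: dict) -> None:
    """Read side: keep the query model up to date."""
    if event["type"] == "stock_received":
        stock_by_sku[event["sku"]] += event["qty"]

def query_stock(sku: str) -> int:
    """Reads never touch the write path; they hit the projection."""
    return stock_by_sku[sku]

handle_command({"type": "receive_stock", "sku": "SKU-001", "qty": 10})
assert query_stock("SKU-001") == 10
```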
Yet, while both ES and CQRS are already popular in large consumer-oriented tech companies and in quantitative trading (finance), they have only very recently started to gain traction in mainstream enterprise software. As a result, the ES+CQRS tooling is still nascent compared to its counterparts in the realm of relational databases. Nevertheless, this approach offers ways to not only steeply reduce hardware costs, but also to compress latencies that frequently remain an acute problem for most ERP implementations.
Lokad’s take
As ERPs are not even suitable for analytical purposes - hence the need for BI (Business Intelligence) tools - it should not be a surprise that their track record is dismal whenever predictive analytics are involved. By way of anecdotal evidence, none of the machine learning / data science revolutions happened in ERPs, and when facing those classes of requirements, teams invariably fall back on spreadsheets or ad hoc scripts.
Lokad itself emerged as a specialized app intended to supplement - not replace - ERPs, as an analytical layer dedicated to the predictive optimization of supply chains. Most of our core features, such as probabilistic forecasting capabilities, intended to support mundane decisions such as stocks / purchasing / production / assortment / pricing, are downright impractical to implement inside ERP systems.
Notes
(1) This view is somewhat simplistic. In practice, during the last few decades, software engineering has leapt forward along with raw computing resources. As a result, capabilities that would have been exceedingly costly to develop in the 80’s might have become vastly cheaper a few decades later. ERP vendors do reprioritize their development efforts taking this phenomenon into account.
(2) Historically, the very first ERPs of the 70’s - or rather ERP-like pieces of software, as the term ERP would only arise later - relied on crude flat-file databases. Relational databases emerged as a superior alternative to flat-file databases from practically every angle. Thus, the early ERP vendors upgraded their designs toward relational databases. However, as far as whole-database batch data processing was concerned, flat-file databases remained practically superior - given the same investment in computing hardware - until the columnar flavor of relational databases became popular in the 2010’s.
(3) As many ERP vendors attempted to build and deliver forecasting features, database vendors, in turn, attempted to build and deliver forecasting capabilities in their systems as well. To my knowledge, those native “forecasting” capabilities of databases are both widespread and largely unused (or heavily compensated for manually with Excel spreadsheets), preserved only for backward compatibility by vendors.
(4) Real-time processing is a relatively subjective terminology. Technically, the speed of light itself puts hard limits on what latencies can be achieved with distributed systems. The whole point of having an ERP is to coordinate suppliers, plants, warehouses, … which are geographically dispersed.
(5) The whole selling point of having a per-module pricing strategy is that adding a new module is a (near) plug-and-play experience for the client company. However, the price to pay, design-wise, for this ease of adoption is a heavy coupling between the modules, i.e. a monolithic design. Touching any module impacts many other modules.
(6) Distributed computer systems have been around for decades. However, until cloud computing became mainstream around 2010, nearly every single piece of enterprise software was built around the client-server architecture, which centralizes all processes and data onto a handful of machines, typically a front-end, a back-end and a database. In contrast, cloud computing entails fleets of machines, both for reliability and performance purposes.
(7) This was originally the Unix design philosophy. Post-2010 specialized enterprise apps are typically not as narrow and focused as Unix tools. Yet, these apps are already one or two orders of magnitude less complex than ERPs. Also, this approach should not be confused with microservices, which is a way to internally engineer the apps themselves.