Edge scale

Exponential growth of data

With the advent of the Internet, there has been exponential growth in data. Our ideas of data have changed dramatically. The last year the Encyclopaedia Britannica was printed was 2010. That final edition consisted of 32 volumes, weighed nearly 130 pounds, and contained approximately 50 million words, or 300 million characters. It takes roughly one gigabyte (GB) of disk space to store the entire text of that final edition.

In 1989, a fast connection was 2,400 bits per second. At that rate, it would take about 38 days to download 1 GB. While writing this, my office internet tested at 240 Mbps, or 100,000 times faster than in 1989. It now takes less than a minute to download 1 GB.
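If you want to check that arithmetic, here is a quick back-of-the-envelope sketch in Python (it assumes 1 GB is 8 billion bits and ignores protocol overhead):

```python
# Back-of-the-envelope download times, ignoring protocol overhead.
GIGABYTE_BITS = 8 * 10**9          # 1 GB expressed in bits

modem_1989_bps = 2_400             # a fast connection in 1989
office_today_bps = 240 * 10**6     # 240 Mbps office connection

print(f"1989:  {GIGABYTE_BITS / modem_1989_bps / 86_400:.1f} days per GB")  # ~38.6 days
print(f"Today: {GIGABYTE_BITS / office_today_bps:.0f} seconds per GB")      # ~33 seconds
print(f"Speed-up: {office_today_bps / modem_1989_bps:,.0f}x")               # 100,000x
```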

The point of this history is that change has not stopped. Data is still growing exponentially. Exponential growth is a hard thing for the human mind to process. For instance, if you stacked one penny on top of another, and doubled the stack every day, in 38 days it would reach the moon.
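That claim is easy to check. A minimal sketch, assuming a penny is about 1.52 mm thick and the moon is about 384,400 km away:

```python
# How many daily doublings before a stack of pennies reaches the moon?
PENNY_THICKNESS_M = 0.00152        # ~1.52 mm per penny (assumed)
MOON_DISTANCE_M = 384_400_000      # average Earth-moon distance (assumed)

pennies, days = 1, 0
while pennies * PENNY_THICKNESS_M < MOON_DISTANCE_M:
    pennies *= 2                   # double the stack every day
    days += 1

print(days)                        # 38
```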

From 1990 to 2020, the exponential growth of data was in the cloud. The cloud will continue to grow, but from 2020 onwards, that exponential growth will be at the edge.

The overlooked revolution in software

In 1981, my parents bought me my first computer, a TI 99/4A. It had 16 KB of RAM and a cassette tape player attached for storage. Bill Gates later reflected on the memory decisions made that same year:

“I have to say that in 1981, making those decisions, I felt like I was providing enough freedom for 10 years. That is, a move from 64k to 640k felt like something that would last a great deal of time. Well, it didn’t – it took about only 6 years before people started to see that as a real problem.”

Before 2005, enterprise software applications were designed to run on a single computer. As your data grew, you needed a bigger computer: a single server with more disk space, more RAM, more CPU cores. Servers got bigger and bigger to meet the needs of the largest use cases, such as large banking databases. However, most applications did not need all those resources. VMware has made billions carving those big servers up into virtual machines (VMs) sized for what most applications actually need.

With Internet growth, data eventually got so big that there was no computer big enough to handle the processing. In 2005, Doug Cutting and Mike Cafarella invented Hadoop, the beginning of what we call big data. Hadoop, and parallel processing in general, makes several individual servers look and act like one big computer. Hadoop took parallel processing out of the university and defense laboratories and into the corporate data center. From then on, all new software breakthroughs, such as Kubernetes and Kafka, were designed to run naturally on multiple servers. In fact, these new frameworks require a minimum of three servers to be fully functional. Parallel processing is everywhere.
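One reason for the three-server minimum is majority quorum: a cluster can only keep making decisions while more than half of its members are reachable. A quick sketch of that arithmetic (a general illustration, not tied to any one framework):

```python
# Majority-quorum arithmetic: how many failures can a cluster survive?
def failures_tolerated(servers: int) -> int:
    quorum = servers // 2 + 1      # majority needed to keep operating
    return servers - quorum

for n in (1, 2, 3, 5):
    print(f"{n} server(s): survives {failures_tolerated(n)} failure(s)")
# 1 -> 0, 2 -> 0, 3 -> 1, 5 -> 2. Three servers is the smallest cluster
# that stays available when a single server goes down.
```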

In the meantime, however, hardware manufacturers are still making bigger and bigger boxes, as if nothing has changed in the software world. Each year the processors grow: 64-core, 72-core, 96-core CPUs. Fifteen years after the revolution in software, the hardware companies have still not changed course.

All of which leads us to a strange place. We take these giant servers and carve them into a bunch of smaller servers using virtual machines. Then we lay a framework like Hadoop, Kubernetes, or Kafka over all the virtual machines to make them look and act like a single bigger server. It all seems a bit mad.

Why not just make small servers?

Thinking in edge scale

“Generals always fight the last war.”

It is human nature to take the solutions that worked for the last challenge and apply them, without adaptation, to the new challenge. We are seeing this with edge computing. The cloud was the last challenge. Engineers are applying the same solutions that work in the cloud to the edge without considering how different the edge is from the cloud.

For instance, rack space is a primary consideration in a data center, so IT professionals fill the data center with servers that have very large core counts. The knee-jerk reaction is: the more cores, the better. When they plan their edge deployments, they attempt to use the same hardware that they use in the data center.

Table 2: Data center thinking vs. edge thinking

                                   Data center   Data center thinking   Edge thinking
                                                 at the edge            at the edge
Number of locations                          1                  2,000           2,000
Number of servers per location             100                      1               3
Total number of servers                    100                  2,000           6,000
Number of cores per server                  96                     96               8
Total cores                              9,600                192,000          48,000
Price per server                       $20,000                $20,000          $2,000
Total price                         $2,000,000            $40,000,000     $12,000,000

If I have 100 servers in my data center with 96-core CPUs, that is 9,600 cores of compute power. That's a lot of horsepower. (We will giggle about that in 20 years.) If a server costs $20,000, that is $2 million for 100 servers.

Now suppose I have 2,000 stores and I try to deploy that same server at each one. That is 192,000 cores of compute power, which is complete overkill, and it would cost $40 million. Plus, it's only one server per location, so I have no high availability.

If, however, I deploy three 8-core servers per store, that is 48,000 cores of compute. Note that it is five times the compute power of the data center. If each server costs $2,000, the total cost is $12 million. As my compute needs at each store grow, I can incrementally add more 8-core servers.
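The comparison in Table 2 is simple enough to script. Here is a small sketch that reproduces the numbers, using the same illustrative prices and core counts as above:

```python
# Reproduce Table 2: data center vs. two ways of thinking about the edge.
def deployment(locations, servers_per_location, cores_per_server, price_per_server):
    servers = locations * servers_per_location
    return servers, servers * cores_per_server, servers * price_per_server

scenarios = {
    "Data center":                deployment(1,     100, 96, 20_000),
    "Data center thinking, edge": deployment(2_000,   1, 96, 20_000),
    "Edge thinking, edge":        deployment(2_000,   3,  8,  2_000),
}

for name, (servers, cores, price) in scenarios.items():
    print(f"{name}: {servers:,} servers, {cores:,} cores, ${price:,}")
# Data center:                  100 servers,   9,600 cores,  $2,000,000
# Data center thinking, edge: 2,000 servers, 192,000 cores, $40,000,000
# Edge thinking, edge:        6,000 servers,  48,000 cores, $12,000,000
```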

Once you begin to think in terms of edge scale, you realize small servers are key.