Other than the cutting edge of computing, what exactly is "edge computing"? You may have heard the term thrown around, but as devices get smaller and smarter and wireless connections get faster, edge computing is likely to become more and more important.
This article will define edge computing, explain how it compares with cloud computing, and look at who uses the technology and how.
What Is Edge Computing?
Essentially, computing can happen on the device, like with a calculator, or over the internet, like most of what you do on your phone or computer.
Computing that takes place off the device, over the internet, is usually handled by the more familiar cloud computing.
Cloud computing is computing carried out by a network of connected servers in a data center. You access this network via an internet-connected device that doesn't itself contribute to the computing task.
Edge computing is essentially a form of cloud computing in which the work is distributed across many devices near the user rather than concentrated in one location, known in cloud computing as the "origin server."
In fact, “Edge Cloud Computing” recreates a cloud-like system using “edge servers” or “micro-servers” instead of origin servers.
While edge cloud computing works very much like regular cloud computing for the end-user, edge devices share the computing task with servers.
Why Is Edge Computing Important?
Edge computing is important in modern and next-generation devices because it can be more reliable and secure than pure cloud computing, and more powerful and versatile than computing strictly on the device.
Edge Computing Allows for Smaller, Faster Devices
Most users crave devices that are both smaller and more powerful. Because cloud computing harnesses whole networks of computers, it will always be more powerful than any device most people could reasonably own.
Cloud computing solves the device size problem. However, we also want computing to be fast.
When you use cloud computing for word processing, it might feel instantaneous. In reality, transmitting data from a device to the cloud and back takes time; word processing just feels fast because it moves very little data.
With data-heavy cloud tasks, like streaming games or other media, you're more likely to notice a drop in performance. The drop is even more noticeable when the cloud service is in high demand.
Most edge devices split up the computing load: elements that don't change often or quickly are processed on the device, while elements that change rapidly and require more processing power are processed in the cloud.
In this way, the device takes on some of the processing demand rather than everything happening in the cloud. Less data sent to the cloud means faster results over the same internet connection.
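To make the split concrete, here is a minimal Python sketch under assumed conditions: the payload fields, function names, and the stand-in "cloud" call are all illustrative, not any real product's API.

```python
# A minimal sketch of the device/cloud split. Everything here is
# hypothetical: the fields, the function names, and the fake "cloud" call.

def process_locally(state: dict) -> dict:
    # Cheap, slowly changing work (a status overlay) stays on the device.
    state["overlay"] = f"battery at {state['battery_pct']}%"
    return state

def process_on_cloud(samples: list) -> str:
    # Stand-in for a network call: a real edge device would send this
    # heavy, fast-changing payload to an edge server and await the result.
    return f"analysed {len(samples)} samples"

state = {"battery_pct": 87, "sensor_samples": list(range(10_000))}
state = process_locally(state)                        # no network round trip
state["analysis"] = process_on_cloud(state["sensor_samples"])
print(state["overlay"], "|", state["analysis"])
```

The design point is simply that only the large, fast-changing payload ever crosses the network; the lightweight work never needs to leave the device.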
Edge Computing Adds Security
Any data that is processed on the device doesn’t need to be sent to the cloud. Any data that doesn’t need to be sent to the cloud is safer from potential data thieves.
The idea that the cloud itself is unsafe is a common cloud computing myth. However, any connection to the internet is a potential opportunity for hackers. Just as the bank robbers of the old Wild West might attack the stagecoach rather than the bank, the weak point isn't necessarily the cloud itself but the data in transit to and from it.
Edge computing allows data to be split between the device and the cloud to speed things up. But that same split can protect privacy: edge devices can arrange processing so that sensitive information never leaves the device.
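As a rough illustration, here is a minimal sketch of that on-device filtering, assuming a hypothetical smart-camera event payload; the field names and the "sensitive fields" list are inventions for the example.

```python
# A minimal sketch of on-device filtering with hypothetical field names.
# Raw sensitive data is summarized locally, and only the summary plus
# non-sensitive fields are ever uploaded.

SENSITIVE_FIELDS = {"face_embedding", "audio_clip"}

def summarize_on_device(event: dict) -> dict:
    # Reduce raw biometrics to a harmless yes/no summary on the device.
    return {
        "person_detected": "face_embedding" in event,
        "sound_detected": "audio_clip" in event,
    }

def build_cloud_payload(event: dict) -> dict:
    # Upload only non-sensitive fields plus the local summary.
    safe = {k: v for k, v in event.items() if k not in SENSITIVE_FIELDS}
    safe.update(summarize_on_device(event))
    return safe

event = {
    "timestamp": "2024-05-01T12:00:00Z",
    "face_embedding": [0.12, 0.98],   # never leaves the device
    "audio_clip": b"\x00\x01",        # never leaves the device
}
print(build_cloud_payload(event))
```

Even if the upload were intercepted, the thief would get a timestamp and two booleans, not a face or a recording.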
Furthermore, with edge cloud computing, outages are less likely for users: an individual edge server or micro-server can be taken down for maintenance or suffer damage without every user on the network being affected.
Are There Downsides to Edge Computing?
There are downsides to edge computing. Some of them stem from the fact that edge computing still relies on the cloud; edge devices, for example, still need an internet connection for maximum utility. However, edge computing technology poses some problems of its own as well.
Right now, edge devices require fairly specialized computer chips. As a result, most edge devices can only really apply edge computing to a single task. They aren't necessarily single-use, but they also aren't as versatile as strictly cloud-based devices.
Who Uses Edge Computing?
Right now, edge computing use cases are fairly limited. The technology is mostly employed by companies with a compelling reason not to rely strictly on onboard or cloud computing.
Cellnex Telecom is a wireless telecommunications operator that serves most of Europe. By employing edge cloud computing, which distributes computing to multiple locations rather than relying on a data center, the company offers better and more reliable service across its vast market and dispersed userbase.
Perceive creates chips for edge devices, primarily smart home security devices. These chips allow the devices to understand images, video, and audio while limiting the volume of potentially sensitive data they have to send to the cloud. Similarly, companies like Microsoft use edge computing in IoT devices that are less cloud-dependent.
AT&T promises that edge computing will make cloud gaming faster and more accessible in the future. Games require more data to stream than other forms of media because gaming means reacting to user input in real time. Processing some commands on the device or distributing graphics rendering to nearby edge servers may reduce bandwidth requirements and latency.
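As a back-of-the-envelope illustration of why handling even some actions locally helps, here is a quick calculation with made-up numbers; none of these figures come from AT&T or any real deployment.

```python
# Toy latency math. All numbers are assumptions for illustration only.

CLOUD_RTT_MS = 60      # assumed round trip to a distant origin server
LOCAL_MS = 5           # assumed on-device handling time per action
local_fraction = 0.4   # assumed share of actions resolvable on the device

avg = local_fraction * LOCAL_MS + (1 - local_fraction) * CLOUD_RTT_MS
print(f"average latency: {avg:.0f} ms, vs {CLOUD_RTT_MS} ms all-cloud")
# -> average latency: 38 ms, vs 60 ms all-cloud
```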
Are You Living on the Edge?
Depending on how you use connected devices, you might already be using edge computing solutions at work or in your home. For some time to come, smart home devices will likely be where most people first encounter edge computing.
However, as edge computing makes devices smaller, faster, and more powerful, the technology's applications are only likely to become more widespread.
Image Credit: Geralt/Pixabay