Big Data - Data Highways for Intelligent Machines

With Industry 4.0, data volumes are growing at a dizzying rate. A fast, powerful, and resilient infrastructure is therefore crucial for the success of the vision.

Only a comprehensive expansion of broadband fiber optic networks can meet the requirements of the IoT. Many see the new 5G mobile communications standard as a decisive impetus towards a strongly networked future.

Large Bandwidths

are required for networked systems

Above all else, the fourth industrial revolution means one thing: data, data, and even more data. In a study published in November 2018, the analyst firm IDC projects that around 175 zettabytes of new data will be generated in 2025, compared with 33 zettabytes in 2018.

You may recall that one zettabyte corresponds to one sextillion bytes or one trillion gigabytes.

For a long time, private consumers were the largest producers of data, but their share of the “global datasphere” continues to decline. Analysts assume that in six years around 65% of new data will come from companies. One reason for this is the constantly growing number of sensors and “intelligent,” communicating machines. Ninety zettabytes are expected to be generated in the IoT alone – a considerable proportion of it in industry.

If IDC’s forecast is correct, around 30% of the data volume in 2025 will be real-time data processed within milliseconds. This is a crucial prerequisite for many projects currently in planning; the entire vision of Industry 4.0 depends on it. The two biggest challenges are as follows:

  • Computing power.
    Computers must process large amounts of data in the shortest possible time.
  • Bandwidth.
    The generated data must be transmitted without delay.

Solutions already exist in both areas, but much remains to be done before they can be implemented across the board.

Edge Computing

When important computing operations are carried out where the data is generated

Computing Power Shifts to the Edge. In traditional IT infrastructures, central data centers perform by far the largest share of the work. This is where data is processed, managed, secured, and retrieved as required. In some cases, these nodes are far away from the end devices where the data is generated and needed. In the cloud, several data centers located far apart often work together. These structures function excellently in classic data processing. With Industry 4.0 and the IoT, however, they are reaching their limits:

The Challenge. The “intelligent” machines of smart factories continuously produce vast amounts of sensor data. Much of it is crucial for ongoing operation and must be evaluated and processed immediately; after that, it is no longer needed and is “thrown away” at once. Other information is analyzed more extensively and archived for medium- or long-term use. Particularly in the first case, the latency that inevitably occurs during transmission to and from the data center must be avoided.

The solution is edge computing. Important computing operations are carried out where the data is generated – in the machine or even in the sensor itself. “Mini-computers,” so-called programmable automation controllers (PACs), decide on site which data is forwarded to the data center and which data is needed immediately.

For example, current sensor data is compared with reference values. If everything is in the green range, the readings are deleted immediately; only the information “everything is okay” is forwarded to the server or the cloud.
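This on-site filtering can be sketched in a few lines. The thresholds, sensor IDs, and payload format below are illustrative assumptions, not part of any specific PAC firmware:

```python
# Illustrative edge filter: drop in-range readings on site and forward
# only anomalies plus an overall status message. Thresholds and the
# payload format are hypothetical.

def filter_readings(readings, low, high):
    """Return out-of-range readings plus an overall status message.

    readings: list of (sensor_id, value) tuples
    low, high: acceptable value range (the reference values)
    """
    anomalies = [(sid, v) for sid, v in readings if not (low <= v <= high)]
    status = "everything is okay" if not anomalies else "check required"
    # In-range readings are discarded here; only this small payload
    # would be forwarded to the data center or the cloud.
    return {"status": status, "anomalies": anomalies}
```

With two temperature readings and an acceptable range of 20–25, `filter_readings([("t1", 21.5), ("t2", 99.0)], 20.0, 25.0)` discards the in-range reading and forwards only the out-of-range one together with the status "check required".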

Fog Architecture. In a so-called fog architecture, an additional layer between the end devices and the cloud manages computing and transmission capacities in order to use resources optimally. In this way, edge computing reduces the amount of data transmitted over the networks. This keeps latency low, reduces the load on data centers, and ensures trouble-free operation of the smart factory.

Fiber Optic Cables

Data Transfer at the Speed of Light

Even though the flood of data has been filtered and reduced by edge computing, enormous computing power is still required in the cloud. The basis for this is always a powerful infrastructure. Regardless of how the individual machines of a smart factory transmit their data, the bandwidth for a smoothly functioning Industry 4.0 environment can only be provided by fiber optic cables.

To set up a functioning network in extensive industrial facilities, several kilometers of cable are quickly required. The classic copper cables of Industry 3.0 no longer suffice because they can carry a data signal unamplified for only about 100 meters.

Depending on cable type and wavelength, bandwidths of 10 Gbit/s can still be achieved over 40 km with fiber optic cables. In order to convert existing facilities to Industry 4.0, companies must therefore think above all about upgrading their cable networks. It is not enough to simply replace copper cables with optical fibers.
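Reach figures like these follow from a simple optical power budget. A minimal sketch, assuming typical (not vendor-specific) transmitter, receiver, and fiber-loss values:

```python
# Rough optical link-budget estimate. All figures used here are
# typical illustrative values, not specifications of any product.

def max_span_km(tx_power_dbm, rx_sensitivity_dbm,
                fiber_loss_db_per_km, connector_loss_db=1.0,
                margin_db=3.0):
    """Estimate the maximum unamplified fiber span in kilometers."""
    budget_db = tx_power_dbm - rx_sensitivity_dbm
    usable_db = budget_db - connector_loss_db - margin_db
    return usable_db / fiber_loss_db_per_km

# Example: 0 dBm transmitter, -18 dBm receiver sensitivity,
# 0.25 dB/km single-mode fiber at 1550 nm (typical values).
# Usable budget: 18 - 1 - 3 = 14 dB, giving 14 / 0.25 = 56 km.
span = max_span_km(0.0, -18.0, 0.25)
```

Lower-loss fiber or a more sensitive receiver extends the reach accordingly; in practice, dispersion also limits the span at high data rates, which this sketch ignores.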

A comprehensive network architecture with appropriate redundancies and bridging mechanisms is required to ensure that local disturbances and failures do not impair the overall functionality of the production facilities.

New Mobile Radio Standard

as Starting Signal for Industry 4.0

Experts agree that it is not enough to upgrade some company sites and production facilities. For example, a large electronics group has already linked locations worldwide to allow test results from Asia to be directly implemented in the production processes of a facility in Germany.

Even large, powerful corporations cannot implement such cross-location networks on their own. They only work if public networks provide the necessary bandwidth across the board. Even those who follow media coverage only occasionally know that much still has to happen in the rural regions of Germany before companies located there can also benefit from Industry 4.0.

New impetuses are expected from the new 5G mobile communications standard, the roll-out of which will begin in the near future. It was developed with IoT in mind and offers around 100 times more bandwidth than LTE, high availability, and low response times. Latency times of less than one millisecond have already been achieved in the laboratory. However, this speed comes at the expense of the range; therefore, considerably more transmitter masts will be required in the future.

To connect these masts to the backbone network, the fiber optic infrastructure must also be expanded further. To push ahead with a fast, nationwide roll-out, legislators have enacted very ambitious requirements; however, these are already being contested by the major network operators.

If Industry 4.0 is soon to become a reality everywhere in Germany, there is no way around 5G.

Optical Network Monitoring Systems

Network Security through Real-time Monitoring

The more connected processes and infrastructures are, the more vulnerable they become to any kind of disruption: this can include damage to the fiber itself, errors in the hardware and software of network components, and cyberattacks by hackers and spies. However, fiber optics also offer intelligent solutions for early detection of such malfunctions, which keeps downtimes as low as possible.

At the IT level, performance problems and external interventions must be resolved as quickly as possible. Network performance monitoring and diagnostics (NPMD) systems continuously record and analyze all data traffic. This allows malfunctions to be detected quickly and also analyzed retrospectively. Sophisticated NPMD solutions such as Viavi Solutions’ Observer products even allow data traffic to be analyzed at the packet level. Complementary software displays monitoring results clearly in graphical form and concentrates on a few key performance indicators (KPIs), so that not only IT experts can react quickly and efficiently to disruptions. The new Observer GigaFlow even offers end-to-end evaluation of the individual data streams in a network.
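The idea of condensing raw traffic into a few KPIs can be illustrated generically. The flow-record fields and KPI names below are assumptions for this sketch and do not reflect the data model of Observer or any other product:

```python
# Generic illustration of NPMD-style KPI aggregation from flow records.
# Record format and KPI names are hypothetical.

def compute_kpis(flows):
    """Condense per-flow records into a few key performance indicators.

    flows: list of dicts with keys 'bytes', 'packets',
           'retransmits', and 'latency_ms'
    """
    total_bytes = sum(f["bytes"] for f in flows)
    total_packets = sum(f["packets"] for f in flows)
    retransmits = sum(f["retransmits"] for f in flows)
    avg_latency = sum(f["latency_ms"] for f in flows) / len(flows)
    return {
        "total_bytes": total_bytes,
        "avg_latency_ms": avg_latency,
        # A rising retransmission rate is a classic early warning sign.
        "retransmission_rate": retransmits / total_packets,
    }
```

A dashboard would track such aggregates over time and alert when, for example, the retransmission rate or average latency crosses a threshold.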

Optical time domain reflectometry (OTDR) is a proven technology for monitoring physical network structures. A single fiber in the cable suffices for monitoring. The Rayleigh scattering of light pulses is evaluated in order to detect and locate faults and impurities in optical fibers. In this way, extended networks can also be monitored around the clock. In the event of a fault, it can be rectified in the shortest possible time. Manufacturers such as Viavi Solutions offer scalable solutions that can be used at the company level, as well as in large-scale telecommunications networks – from individual OTU units to comprehensive optical network monitoring systems (ONMSI).
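The localization behind OTDR rests on a simple time-of-flight calculation: a reflection's distance follows from the pulse's round-trip time and the speed of light in the fiber. A minimal sketch, assuming a typical group index for silica fiber:

```python
# OTDR fault localization: distance to a reflective event from the
# round-trip time of the light pulse. The group index is a typical
# value for silica fiber around 1550 nm, used here for illustration.

C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def fault_distance_m(round_trip_s, group_index=1.468):
    """Distance to a reflective event, in meters."""
    # The pulse travels to the fault and back, hence the factor of 2;
    # light moves at c / group_index inside the fiber.
    return C_M_PER_S * round_trip_s / (2.0 * group_index)

# A reflection arriving 100 microseconds after the pulse was launched
# places the fault roughly 10.2 km down the fiber.
d = fault_distance_m(100e-6)
```

The pulse width and the instrument's timing resolution determine how precisely such an event can be located along the fiber.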

Armin Kumpf