Electrifying the UK rail network with high-power electronics

Feature

The political wind has been blowing toward complete rail electrification for decades, yet there has been little significant progress to date. One in three trains on the UK’s rail network is still diesel-powered: a technology that has been around since 1912 and, despite marked improvements in power and efficiency over the years, is getting tired and needs replacing.

In comparison, electric trains emit 60% less CO2 over similar journeys and produce no exhaust at the point of use, making them healthier for the environment and for people, who are no longer exposed to choking, carcinogenic diesel fumes. In addition, they offer better speed, acceleration, reliability and braking power. Sadly, only two fifths of the UK’s 15,847km rail network is electrified, leaving diesel trains as the only option for the remaining 9,835 kilometres.

This is the fundamental reason diesel locomotives have continued in service into the 2020s, despite such obvious and inherent design inefficiencies compared with electric trains.

However, upgrading the infrastructure for electric traction is invariably expensive: land must be acquired, buildings planned and built, staff employed and ongoing maintenance performed. This infrastructure must last a long time and make the best use of public funds. Finding the delicate balance between efficiency, efficacy and economy falls to the design engineers responsible for both mobile and stationary railway equipment. Likewise, it’s critical that original equipment manufacturers provide electrical components that are optimised for both application and cost.

Like a speeding bullet
A 150-tonne locomotive travelling at 90mph carries around 120 million joules of kinetic energy with it, equivalent to roughly 29kg of TNT explosive. When it comes to a stop, this huge amount of energy is simply discarded through the braking process.
These numbers get very large, very quickly. For instance, a diesel express train running from London King’s Cross to Edinburgh might make nine separate stops. Multiply that by trains running every half an hour, 20 hours a day, and the sheer amount of wasted energy becomes staggering: more than 10 tonnes of TNT, or well over a thousand litres of petrol, every day on just one route. That is a lot of discarded energy, and it brings the energy-saving benefits of electrified trains into sharp focus.
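As a rough sanity check, the arithmetic behind these figures can be reproduced in a few lines of Python; the TNT and petrol conversion factors below are standard approximations rather than figures from the rail industry.
```python
# Back-of-the-envelope check of the braking-energy figures above.
# The conversion factors are standard approximations.

MASS_KG = 150_000            # 150-tonne locomotive
SPEED_MS = 90 * 0.44704      # 90 mph in metres per second
TNT_J_PER_KG = 4.184e6       # conventional TNT equivalence
PETROL_J_PER_L = 34.2e6      # approximate energy content of petrol

# Kinetic energy discarded in one stop from 90 mph
ke_per_stop = 0.5 * MASS_KG * SPEED_MS**2           # ~1.2e8 J

# One route: nine stops per journey, a train every half hour for 20 hours
stops_per_day = 9 * 2 * 20                          # 360 braking events

daily_energy = ke_per_stop * stops_per_day
print(f"Per stop: {ke_per_stop/1e6:.0f} MJ (~{ke_per_stop/TNT_J_PER_KG:.0f} kg of TNT)")
print(f"Per day:  {daily_energy/1e9:.0f} GJ "
      f"(~{daily_energy/TNT_J_PER_KG/1000:.1f} t of TNT, "
      f"~{daily_energy/PETROL_J_PER_L:.0f} litres of petrol)")
```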
Rheostatic, air and friction brakes simply discard this energy as heat, whereas fully electric trains can employ regenerative braking, in which the captured kinetic energy is turned back into electricity. This electricity can be stored locally in battery banks or supercapacitors until it is required, an approach already used by many electric vehicles. However, local storage adds construction and maintenance costs, and increases the weight of the vehicle.
Alternatively, because electric trains are constantly connected to the power grid, a better approach for large-scale electrified rail is to feed the power back into the grid as it is recovered. In a well-implemented system, with trains stopping and moving off all the time, the energy pushed onto the grid by a braking vehicle is immediately harnessed to accelerate another train elsewhere, potentially miles away along the track.

Switching problem
A fully regenerative, electrified railway seems ideal in many respects. Less pollution is released into the air, trains move off and stop faster, and wasted energy is minimised. Furthermore, heavy mechanical gearboxes can be replaced with compact resistor banks, cutting the number of heavy moving parts such as diesel pistons and, in turn, reducing wear and subsequent maintenance on the track and wheels.
A fully electrified rail network can be more efficient in many ways, but one vital factor for the system’s proper operation is electromagnetic compatibility. Consider the route that electricity takes through a powered train: power is fed in from the stationary grid through overhead wires or a third rail, and the incoming high-voltage alternating current (AC) is first rectified into direct current (DC). This DC is then inverted back into three-phase AC at whatever voltage the electric motors require at that moment. Adjusting the throttle simply alters this voltage and, consequently, the current and power available to the motors.
Under braking, the energy simply flows in the opposite direction: the electric motors generate three-phase AC, which is rectified to DC and then re-inverted to 50Hz AC that can be fed back to the grid.
Ideally, the incoming AC would be rectified to perfectly smooth DC and then inverted back into a perfect sinusoid. Even with the best, most cutting-edge components, however, this is never quite the case: flaws always creep in. With one or two motors, these imperfections cause no problems, either locally or on the grid. But when scaled up nationally, they can interfere and coalesce into significant harmonic distortion, transients and voltage spikes. Sharp voltage fluctuations have a wide array of negative effects, from blowing fuses and damaging voltage-sensitive equipment to causing interference in the long, unshielded stretches of overhead power lines, which can act like colossal radio transmitters if the harmonics reach the megahertz range.
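A minimal NumPy sketch, not modelled on any particular traction converter, makes the point: if a crudely switched square wave stands in for an imperfect inverter output, the spectrum contains not just the intended 50Hz component but a series of odd harmonics that a clean sinusoid would never carry.
```python
import numpy as np

fs = 100_000                         # sample rate, Hz
t = np.arange(20_000) / fs           # 0.2 s of signal (ten cycles of 50 Hz)

ideal = np.sin(2 * np.pi * 50 * t)   # the sinusoid we would like to feed back
crude = np.sign(ideal)               # crudely switched stand-in for an imperfect inverter

# Magnitude spectrum of the crude waveform
spectrum = np.abs(np.fft.rfft(crude)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# A clean sine would show 50 Hz only; the square wave also carries
# odd harmonics at 150 Hz, 250 Hz, 350 Hz and so on.
for idx in np.argsort(spectrum)[::-1][:5]:
    print(f"{freqs[idx]:6.0f} Hz  amplitude {spectrum[idx]:.2f}")
```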
Each of these phenomena compromises the economic advantages of the electric train, but they can all be addressed with the right equipment. Electromagnetic-compatibility devices, for instance, take the incoming power signal, be it DC or AC, and strip out the unwanted higher frequencies to leave a clean, consistent output.
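The sketch below uses a single low-pass stage as a deliberately simplified stand-in for such a device (real EMC filters typically combine chokes and capacitors tuned to specific bands): high-frequency switching noise riding on a hypothetical 750V DC link is stripped away while the underlying level is left untouched.
```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 50_000                          # sample rate, Hz
t = np.arange(5_000) / fs            # 0.1 s of signal

# Hypothetical DC-link voltage: a 750 V level with high-frequency
# switching noise picked up from the converter stages.
dc_link = 750 + 4 * np.sin(2 * np.pi * 5_000 * t)

# A single low-pass stage standing in for the EMC device: keep the slow
# content, strip everything well above it.
b, a = butter(2, 500, fs=fs)         # 2nd-order Butterworth, 500 Hz cut-off
cleaned = filtfilt(b, a, dc_link)

print(f"Noise before filtering: {np.std(dc_link - 750):.2f} V rms")
print(f"Noise after filtering:  {np.std(cleaned - 750):.2f} V rms")
```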

Stopping a speeding train
Certainly, if Europe’s and eastern Asia’s railways are anything to go by, electrified rail networks are inevitable. What’s crucial is that we get the technology right first time.
Some 330 miles of route have already been allocated to the eagerly anticipated high-speed rail link HS2, intended to connect the north and south of England. After doubts about its future, the government has said the project will go ahead, although most likely delayed to 2029, with the prospect of the Covid-19 pandemic slowing its introduction even further.
HS2 trains will run at up to 250mph, and whilst the brakes have been put on the project for now, OEMs and engineers should already be considering how the trains will operate safely, including their braking systems.
Slowing down a car is relatively simple: depressing the brake pedal pushes two pieces of metal together, and the friction between them slows the vehicle. This is a simple, safe, reliable and cost-effective technology, and it works well for vehicles weighing a few tonnes. However, modern locomotives weigh 10 to 20 times that, which exposes the technology’s shortcomings.
Braking a train with the smaller disc brakes typically found in cars would melt them away in seconds. That’s why many trains employ huge discs and callipers across multiple axles to achieve their braking force. Whilst this approach works, it has several downsides, including significant weight, physical wear over time and high maintenance costs.
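A rough, purely illustrative calculation shows why car-sized discs are hopeless here; the disc masses and the specific heat of steel below are assumed round numbers, not manufacturer figures.
```python
# Illustrative only: temperature rise if one stop's kinetic energy went
# straight into the brake discs. Disc masses and material properties are
# assumed round numbers, not figures from any real vehicle.

MASS_KG = 150_000                  # 150-tonne locomotive
SPEED_MS = 90 * 0.44704            # 90 mph
STEEL_C = 500                      # J/(kg*K), approximate specific heat of steel

ke = 0.5 * MASS_KG * SPEED_MS**2   # ~120 MJ to dissipate

for discs, disc_mass in [(8, 7.0), (16, 60.0)]:   # car-sized vs rail-sized discs
    delta_t = ke / (discs * disc_mass * STEEL_C)
    print(f"{discs:2d} discs of {disc_mass:3.0f} kg each: temperature rise ~{delta_t:,.0f} K")
```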
In high-speed trains, even these larger disc brakes warp, foul with accumulated dust and particles, and wear unevenly, so the braking force varies through each rotation of the disc. This rotationally variable braking profile causes vibrations that reduce braking performance, decrease ride comfort and increase overall maintenance requirements, as the train is effectively shaken apart over time.

Shunting electrons
There is an alternative approach to slowing a massive, multi-tonne locomotive that might initially appear counter-intuitive: using electrons, some of the smallest particles around. It comes from the principle that electric motors and generators are one and the same. Just as pushing power into an electric motor turns the driveshaft, turning the driveshaft causes the motor to push electrons around the wire. Resisting the flow of these electrons, with a bank of resistors, for instance, creates a negative torque on the driveshaft, a technique known as “rheostatic braking”. Rheostatic brakes neatly avoid the issues experienced with large disc brakes because they don’t rely on physical contact and friction.
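A minimal sketch of the principle, assuming an idealised motor-generator with a made-up voltage constant, shows how the resistance across the machine sets the braking effort: the generated voltage rises with speed, the resistors dissipate the resulting power, and the retarding force is simply that power divided by the train’s speed.
```python
# Sketch of the rheostatic-braking principle. The voltage constant and
# resistor values are illustrative assumptions, not real traction data.

MASS_KG = 150_000           # locomotive mass
K_VOLTS_PER_MS = 50.0       # assumed generator constant: volts per m/s of train speed

def braking_force(speed_ms: float, resistance_ohms: float) -> float:
    """Force opposing motion when the motor-generator feeds a resistor bank."""
    emf = K_VOLTS_PER_MS * speed_ms              # generated voltage rises with speed
    power = emf ** 2 / resistance_ohms           # power dissipated in the resistors
    return power / speed_ms if speed_ms > 0 else 0.0   # F = P / v

speed = 90 * 0.44704        # 90 mph in m/s
for r in (2.0, 1.0, 0.5):   # lowering the resistance increases the braking effort
    f = braking_force(speed, r)
    print(f"R = {r:3.1f} ohm: braking force ~{f/1000:5.1f} kN, "
          f"deceleration ~{f/MASS_KG:4.2f} m/s^2")
```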
A further advantage of rheostatic braking comes from its ability to concentrate the braking resistors and, therefore, braking heat in one place. This means that heat-dissipation equipment can be focused on one area, instead of being distributed across multiple brakes on multiple axles. By designing the vehicle around this braking scheme, engineers can find additional uses for the waste heat, such as heating the carriages or powering batteries.
This approach significantly reduces complexity, costs and associated maintenance when compared with alternative methods. Design and operational simplicity is always beneficial, especially in critical systems like braking.
Rheostatic braking also allows for traction to be maintained during braking. If the braking force exceeds the friction between wheel and track, the wheel will skid. Skidding extends braking distances, so to stop the train as quickly as possible, these forces must be closely balanced.
In a mechanical system, temperature, wear and fouling are just a handful of the factors that affect the braking force, and all of them must be accounted for so that the brakes do not overwhelm the available traction. In an electrical brake, by comparison, the only parameter that needs adjusting is the resistance across the motor-generator, which dramatically simplifies traction control.
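Continuing the earlier sketch, with the same made-up generator constant and an assumed wheel-rail adhesion coefficient, traction control reduces to a one-line constraint: choose the resistance so that the electrical braking force never exceeds what the wheels can transmit.
```python
# Sketch of the traction limit described above. The adhesion coefficient and
# generator constant are illustrative assumptions, not real railway data.

G = 9.81
MASS_KG = 150_000
MU = 0.15                     # assumed wheel-rail adhesion coefficient
K_VOLTS_PER_MS = 50.0         # assumed generator constant (volts per m/s)

adhesion_limit = MU * MASS_KG * G     # maximum braking force before the wheels skid

def min_resistance(speed_ms: float) -> float:
    """Smallest resistance that keeps braking force at or below adhesion.

    With F = (K * v)**2 / (R * v) = K**2 * v / R, we need R >= K**2 * v / F_limit.
    """
    return K_VOLTS_PER_MS ** 2 * speed_ms / adhesion_limit

for mph in (90, 60, 30):
    v = mph * 0.44704
    print(f"{mph:2d} mph: keep R >= {min_resistance(v):4.2f} ohm "
          f"to stay under the {adhesion_limit/1000:.0f} kN adhesion limit")
```
In practice the control system adjusts this resistance continuously as the train slows, but the underlying calculation stays that simple, which is precisely why electrical braking makes traction control so much easier.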
