Ride Sharing Buzz

By Seb Lindner

There’s no question that ‘ride sharing’ has changed significantly with the development of new technology. Technically, the first horse and carriage taxi was patented in 1843, but some reports suggest it was happening much earlier. In 1907, the ‘Yellow Cab Co.’ was founded, allowing people to easily pay for a one-off ride. From there we make another big jump to the innovations of companies such as Uber (founded in March 2009) and Lyft (founded in June 2012). These companies came about at the convergence of a few major innovations: smartphones, 3G networks and advanced mapping technology.

So, what does the future hold for ride-sharing? With the convergence of 5G network and autonomous vehicle technology, perhaps there could be another ride-sharing evolution.

Ride Sharing Today

Today, the technology that underpins ride sharing is not IoT (internet of things) connected cars. Instead, it relies on smartphone marketplace applications that connect drivers with passengers.

As the 5G network gets rolled out and cars become constantly connected to the internet, it is likely that drivers will rely on their vehicle dashboard instead of their smartphone. In fact, it’s already happening with infotainment systems developed by Ford, Honda and several other manufacturers working to integrate smartphone apps into the car dashboard.

Autonomous Cars

To this point, ride sharing has worked to match drivers with passengers. However, autonomous vehicles will likely disrupt this by replacing drivers with sophisticated vehicles.

Many companies are working on autonomous vehicles, and it feels like whoever cracks the market first may reign supreme (at least in the Western world). In fact, Waymo has already launched its self-driving vehicles in Arizona and it is charging people to use them.

So, What Happens Now?

Three of the major hurdles in developing fully autonomous vehicles suitable for mass-market ride sharing are technological, legislative and infrastructure barriers. However, there are plenty of companies working on the tech, so that will just improve over time (think Moore’s Law). Around the world (particularly in some US states, such as Arizona), legislation is now being implemented and will continue to evolve. Once one state does it well, I’m sure the others will follow. Lastly, significant infrastructure (charging stations and so on) has already been developed across the US and around the world. At this rate, you should be able to drive just about anywhere by 2025!

How cool would it be if autonomous vehicles became fully independent – recharging, parking and servicing themselves – all the while the owner of the vehicles (likely a fleet operator such as Waymo or Uber) would be making money. That’s a future I like the sound of!

Dec 26, 2018

Infotainment

By Seb Lindner & Tony Kerr

Connected cars already offer a range of infotainment features that few people could have predicted as little as a decade ago. In the near future, you can expect to see even more innovations that make driving safer, more convenient and more entertaining. No one knows exactly what infotainment systems will look like in 10 years, but there are some trends pointing to what you can expect within the next few years.

Buckle Up has reviewed some of the most prevalent trends so you can stay ahead of the curve. When customers ask you what to expect from the next generation of vehicles, you can tell them about the ideas some companies have already started to explore.

Vehicle Navigation Will Become More Robust

The percentage of Americans who own smartphones jumped from 35 percent in 2011 to 77 percent in 2016. Now that most people have smartphones, they don’t care as much about in-car navigation systems. They find it more convenient to use their mobile devices to get directions before they even get behind the wheel.

Car manufacturers have noticed this trend, and they’re taking steps to make in-car navigation more appealing. Toyota plans to debut its Entune 3.0 system in the 2018 Camry. Entune 3.0 takes a radically different approach to navigation than the systems you see in today’s cars.

Entry-level Camry cars will use Telenav’s Scout GPS Link to connect drivers’ phones to their cars’ navigation systems. If you look up directions while walking to your car, the directions will appear automatically on the Camry’s touchscreen. Instead of trying to fight the smartphone industry, Toyota decided to integrate it into their vehicles. It’s a smart move that should make in-car navigation more convenient.

Toyota plans to give higher trim level Camrys a different system. Like many of today’s navigation systems, Toyota’s Dynamic Navigation feature will store maps on a hard drive, DVD or SD card. Unlike today’s models, owners won’t have to visit dealerships to update their maps. Instead, the system constantly communicates with nearby Toyota Smart Centers so it always has the latest maps and directions.

Mobile In-Car Payments Will Make Wallets Unnecessary

Several companies also plan to integrate mobile payments into their vehicles. Instead of pulling out their phones to buy gas, food, coffee and other items, drivers can rely on their cars to pay for them.

Jaguar is the first company to reveal an in-car payment feature. The upcoming Jaguar XE will have an infotainment system that connects to the driver’s smartphone so it can access Apple Pay or PayPal.

Expect to see more companies follow this lead. Toyota, Ford and Honda have already mentioned similar systems, but they haven’t revealed details yet.

Infotainment Will Shift To A Subscription-Based Revenue Model

Although some car manufacturers include infotainment systems in their entry-level vehicles, a lot of companies don’t install infotainment options because they want to keep prices down. This decision is based on the assumption that drivers need to pay for the systems at the time of purchase. Some companies have discovered that they can increase revenues by relying on subscription-based models.

Several infotainment features already require subscriptions. For instance, SiriusXM costs at least $10.99 per month and Spotify Premium costs about $10 per month.

Adopting a subscription-based revenue model for more infotainment features would make it possible for car companies to earn more money from people who aren’t usually willing to pay for infotainment systems. From the consumer’s perspective, spending $10 a month for enhanced navigation feels less painful than buying an infotainment system up front. Simply giving buyers the system and charging a monthly fee for its features could lead to higher profits.

Infotainment systems could head in unexpected directions over the next several years. You know that these features are coming to cars soon, though, because companies have already developed the technology and made plans to install them in vehicles by 2018. Check back with Small World Social for updates as these future trends come to life.

November 1st, 2018

New Mobility Safety

By Seb Lindner & Tony Kerr

With today’s focus on vehicle safety, it’s hard to believe that many car companies didn’t even include seat belts in their vehicles until the 1960s. Today, drivers want smarter, tech-savvy cars that help them avoid accidents and keep them safe when they can’t prevent collisions.

Popular Safety Features

The greatest advantage offered by the connected vehicle concept is the ability to supply information to a driver or to a vehicle itself to help the driver make safer or more informed decisions, potentially avoiding dangerous situations.

Connected cars can share information with other vehicles on the road. For instance, connected cars can share driving data with each other to choose safer routes. Instead of struggling through heavy traffic, drivers can use internet-based navigation systems to avoid congestion that makes accidents more likely.

Practically every vehicle manufacturer offers internet-ready navigation in new models. These vehicles are internet-enabled with built-in systems. The technology may not come in entry-level cars, but higher trim levels usually give shoppers the option to add smart navigation.

Tech-savvy cars share data automatically. Drivers don’t necessarily have to do anything to make their cars safer. In some cases they just have to trust the technology to make smart choices that will help them stay safe on the road, or to help them if something does go wrong.

Many connected cars feature automatic crash notifications, SOS assistance and/or roadside assistance from the manufacturer or a third-party company. Drivers can initiate these features themselves, but if the car registers a crash-like incident, these safety features will automatically go to work reaching out for help. The peace of mind this offers to consumers is a major factor in making their purchases.

These features of course rely heavily on internet service providers to keep the connectivity functioning in order for cars to stay connected. Unfortunately, BMW and its customers are going through quite the headache because of this reliance right now. All of their vehicles before 2014, and the Z4 Coupe through 2016, had 2G services through AT&T’s network. However, AT&T recently shut off 2G services in order to focus on newer technologies. So drivers of affected vehicles no longer have access to services like SOS, Concierge Calls, and BMW Assist eCalls. This is a great reminder of how reliant vehicles are on connectivity.

Mobile Apps & Devices

Not everyone wants to spend money on a high-tech car that communicates with other vehicles. These days, drivers don’t have to. They have access to plenty of apps on their phones that can make driving safer.

Some of the most useful and popular apps for safe driving include DriveSync, Vinli and Zendrive.

Each of these apps takes a unique approach to improving safety.

DriveSync offers a wealth of features designed to make driving easier and safer. It can monitor driving behaviors and give feedback, bring roadside service to a stalled car’s specific location, and even help drivers fill out accident reports.

Vinli is an ecosystem that gives drivers access to more than 40 apps. Since the apps rely on smartphone sensors, drivers don’t need smart cars to use Vinli. Some useful apps within the Vinli ecosystem include Beagle, which tracks teen driving behaviors; Revdapp, which tracks miles and car expenses; and Open Road, which features one-tap calling, navigation and music.

Zendrive uses smartphone sensors to analyze driving behaviors and offer safety suggestions. The app also assigns drivers scores based on their driving behaviors. Someone who speeds, brakes hard and drives erratically will receive a low score. Someone who takes their time and maintains control will get higher scores.

Zendrive is an excellent example of telematics use in connected cars, which can also come in the form of devices installed in the vehicle either at the time of manufacture or afterward. Telematics is the technology of sending, receiving, and storing information relating to remote objects, like vehicles, via telecommunication devices. These device systems record information about driving habits, such as the number of miles driven, speed, brake quickness, etc.
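
As a purely illustrative sketch of the kind of scoring such telematics data makes possible – not Zendrive’s or any insurer’s actual algorithm, and with thresholds and weights invented for the example – a score could be computed from logged speed samples like this:

```python
# Illustrative only: a toy driving score from telematics samples.
# The thresholds and weights are invented for this example and are
# not how Zendrive or any insurer actually scores drivers.

def driving_score(speed_samples_kmh, speed_limit_kmh=60, sample_interval_s=1.0):
    """Return a 0-100 score penalising speeding and hard braking."""
    score = 100.0
    for prev, curr in zip(speed_samples_kmh, speed_samples_kmh[1:]):
        # Hard braking: losing more than 15 km/h in one sample interval.
        decel = (prev - curr) / sample_interval_s
        if decel > 15:
            score -= 5
        # Speeding: any sample above the posted limit.
        if curr > speed_limit_kmh:
            score -= 1
    return max(score, 0.0)

# A trip with one hard stop and a short burst of speeding.
trip = [50, 55, 62, 64, 63, 45, 40, 38, 35]
print(driving_score(trip))
```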

While of course these systems can be built into vehicles, as Tesla does, many third-party devices are available, such as T-Mobile’s dongle attachment and Progressive’s “Snapshot” program. And, as discussed, there are a multitude of mobile apps available to consumers at their fingertips.

App developers know that a lot of people aren’t willing to spend extra money on connected cars, so they have made hundreds of options that use mobile devices to improve driver safety. When you know that a customer isn’t interested in paying the higher price of a tech-savvy vehicle, you can make less-sophisticated cars more attractive by suggesting useful apps & devices.

Maintenance Tips for Connected Cars

Connected cars work best when they have the latest software. Updating the software, therefore, becomes a crucial aspect of maintenance. Without the latest version, cars and apps may not know how to collect or share data.

People who own connected cars also need to make sure they have their sensors and video cameras inspected regularly. Today’s top mechanics do more than replace mechanical parts. They can also test and replace sensors that gather information about road conditions and driving behaviors.

For consumers, keeping up with regular engine maintenance is burden enough. Now they must also stay diligent about making sure their connected car features are in prime condition. This is why maintenance information features are so important to smart car drivers – they can rely on notifications from their vehicles when it’s time to update or if an issue is detected, just as they rely on mobile device notifications for all of life’s other reminders and communication.

August 27, 2018

Autonomous Speed

Autonomous vehicles are on the rise. However, how does a vehicle with no driver know how fast to drive?

Measuring Speed

When the car gets a green light to go, it will accelerate to an appropriate speed. To determine the speed at which it is traveling, the car uses speed sensors similar to the sensor used to drive the car’s speedometer, but with a difference. The Google car’s sensors monitor the rotation of both rear wheels so that it can accurately measure the distance the car has travelled, and the speed at which it is traveling.

A modern speedometer doesn’t actually measure the rotation of the wheels. Instead, it measures the rotation of the output shaft of the gearbox, which is connected to the wheels. In the original design patented by Otto Schulze in 1902, a small gear driven by either the output shaft of the gearbox, or one of the front wheels, drives a flexible rotating shaft (the speedometer cable) connected to the speedometer.

Inside the speedometer, a magnet connected to the cable rotates inside a small aluminium or copper cup. The cup is connected to the needle of the speedometer dial and there is a spring that pulls the needle back towards zero. As the magnet rotates, eddy currents are induced into the cup. The eddy currents set up their own magnetic field which, if there wasn’t a spring, would cause the cup and the needle to spin.

The spring, however, prevents the needle from spinning so that the needle comes to rest at the point where the two forces – the induced force in the cup and the force of the spring – balance each other out. The faster the magnet rotates, the more force is induced into the cup, and the further the needle is rotated.
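
The balance point is what makes the dial linear: the eddy-current drag grows roughly in proportion to the magnet’s speed, so the angle at which it balances the spring is proportional to speed too. A small sketch, with made-up constants, shows the relationship:

```python
# Sketch of the torque balance in an eddy-current speedometer.
# k_eddy and k_spring are made-up constants for illustration; real
# values depend on the magnet, cup and hairspring used.

k_eddy = 0.002    # drag torque per unit of magnet speed (N*m per rpm)
k_spring = 0.05   # restoring torque per degree of needle deflection (N*m per degree)

def needle_angle_deg(magnet_rpm):
    # At rest the two torques balance: k_spring * angle = k_eddy * rpm,
    # so the deflection is simply proportional to speed.
    return k_eddy * magnet_rpm / k_spring

for rpm in (500, 1000, 2000):
    print(rpm, "rpm ->", needle_angle_deg(rpm), "degrees")
```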

All of this changed in the early 1990s when the first all-electronic speedometers began to appear. Electronic speedometers work a little differently to mechanical ones. To start with, there is no rotating cable between the transmission and the speedometer. Instead, a sensor on the transmission output shaft sends electrical pulses to the speedometer, which counts them.

So how does the sensor work?

There are a few options here. One possibility is to have a cam on the shaft that presses a micro-switch each time the shaft rotates. When the switch closes, it sends an electrical pulse to the speedometer. The disadvantage is that at high speeds, the switch will not have sufficient time to return to its resting position before the cam pushes it again, so it will be inaccurate. Mechanical switches are also prone to wearing out, which means that the switch would have to be replaced frequently.

To get around these problems, we could use an optical sensor to detect a reflective patch on the transmission output shaft as the shaft rotates. The sensor would be able to operate at high speeds and has no moving parts so it won’t wear out. The only problem is that car transmissions are notoriously dirty places. Dust and oil would gather on the sensor which would require constant cleaning. We need a sensor that is fast, has no moving parts and doesn’t mind at all if it gets caked in mud.

The sensor that is actually used is magnetic. A device known as a “Hall effect sensor” is able to detect changes in the magnetic field. The sensor is placed near a small gear made of steel, sometimes with a magnet placed on the back of the sensor. As the gear rotates, the teeth of the gear move past the sensor and the magnetic field going through the sensor changes. The sensor detects these changes in the magnetic field and sends pulses to the speedometer.

Because mud and oil are non-magnetic, the sensor doesn’t care at all if it gets dirty. The same type of sensor is also used in the anti-lock braking system to detect wheel rotation, and in the engine to detect the camshaft and crankshaft positions. The pulses are counted by the computer – more pulses per second means more speed.
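
Turning those pulses into a speed reading is simple arithmetic once you know how many pulses correspond to one wheel revolution. A small sketch, with assumed figures for the tooth count, final drive ratio and tyre circumference (not taken from any particular car):

```python
# Converting Hall-sensor pulses into road speed. The gear tooth count,
# final drive ratio and tyre circumference below are assumptions for
# the example, not figures from any particular car.

TEETH_PER_REV = 17        # teeth on the sensor gear (one pulse per tooth)
FINAL_DRIVE_RATIO = 3.9   # output-shaft revolutions per wheel revolution
TYRE_CIRCUMFERENCE_M = 1.95

def speed_kmh(pulses_per_second):
    shaft_rps = pulses_per_second / TEETH_PER_REV
    wheel_rps = shaft_rps / FINAL_DRIVE_RATIO
    metres_per_second = wheel_rps * TYRE_CIRCUMFERENCE_M
    return metres_per_second * 3.6

print(speed_kmh(600))  # ~600 pulses/s -> roughly 63 km/h
```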

The Google car doesn’t actually use the speedometer to measure speed. Instead, the rotation of both rear wheels is measured. This gives the system more information about how the car is actually moving. For example, as the car goes around a corner, the wheels rotate at different speeds. Having separate sensors on the wheels captures this extra information that the speedometer does not provide.
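
A minimal sketch of what the extra sensor buys you, using textbook differential kinematics rather than anything from Google’s actual software: if the two rear wheels cover different distances over an interval, the difference divided by the track width gives the change in heading. The track width below is an assumed figure.

```python
# A minimal sketch of what separate rear-wheel sensors add: if the two
# wheels travel different distances, the car must be turning. The track
# width is an assumed figure, and this is textbook kinematics, not
# Waymo's actual odometry code.

TRACK_WIDTH_M = 1.6  # distance between the rear wheels (assumed)

def odometry_step(left_wheel_m, right_wheel_m):
    """Distance travelled and heading change over one measurement interval."""
    distance = (left_wheel_m + right_wheel_m) / 2.0
    heading_change_rad = (right_wheel_m - left_wheel_m) / TRACK_WIDTH_M
    return distance, heading_change_rad

# The right wheel covering more ground than the left implies a left turn.
print(odometry_step(left_wheel_m=4.95, right_wheel_m=5.05))
```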

Putting it all together

So now our autonomous car has all the information it needs to navigate safely around our city. Its LIDAR can map the street layout including other cars, pedestrians, trees and the road. Its RADARs tell it how close it is to the cars in front of and behind it, its camera can see the traffic lights and it knows how fast it is going thanks to the sensors on its wheels. The GPS and mapping software in its processor unit allows it to know where it is in the world, where it is going and how it will get there.

The control software uses all of the data from the sensors and combines them using a set of rules that govern how the car should behave. The result is a system that is more alert, infinitely more patient, and not prone to fatigue like its human counterpart. The Google car is actually a better driver than any of us.

In 1973, the British science fiction writer and futurist Arthur C. Clarke postulated the third of what have come to be known as his three laws: “Any sufficiently advanced technology is indistinguishable from magic.”

To people living in the middle ages, the idea of carrying on a conversation with someone in another village via a small flat block of glass would have been deemed witchcraft. Even the moving pictures on its surface would have seemed like magic. Today we have mobile phones that allow us to talk to anyone anywhere in the world. To someone living in the early eighteenth century, the idea that people would be able to fly above the clouds from one city to another would have seemed preposterous, and yet at this very moment, around a million people are doing just that.

The idea of robots driving cars has been around since the mid twentieth century and yet right now, at the advent of the autonomous vehicle age, the prospect of self-driving cars seems more than just a little bit magical.

Autonomous Imaging

So far we’ve had a look at how the Google car uses GPS and mapping to figure out where it is and where it is going, uses LIDAR to scan and map its environment and RADAR to detect objects in its immediate vicinity. There are, however, a few more things that the car needs to know about before it can operate safely and reliably in an environment as messy and complex as a suburban street.

Getting the Picture

To begin with, the car needs some means of determining whether it is coming up to a stop sign or a red traffic light. The stop sign could be included in the mapping data used by GPS, but the map won’t show temporary traffic signs or the state of the traffic lights that change constantly and there is no system – yet – to tell the car whether the traffic light is red, green or yellow. The car has to figure that out for itself.

At a school crossing, or in an area where there are road works in progress, the car needs to be able to determine the difference between a crossing guard holding up a sign that says “stop” (stop and wait) from a guard holding up a sign that says “slow” (move ahead slowly).

A camera on board the car could be used – and is used – to solve this problem. With the sophisticated high speed digital image processing available from companies such as Nvidia and AMD, the car is able to isolate pertinent parts of the image – such as traffic lights and street signs – to determine whether it is approaching a traffic light and what state the light is in.

It is interesting to note that the kind of processing required to find a traffic light in a street scene is not that much different to the kind of processing required to turn a software model of your favorite first person video game into an image on your computer monitor.

Image processing can be done by ordinary CPUs (Central Processing Units) such as those found in your computer, but CPUs are not well adapted to image processing and are a bit slow at it. Graphic card GPUs, on the other hand, are very good at the kinds of maths that are used for image processing and manipulation.

So What’s the Difference?


Let’s have a look at the way that GPUs differ from CPUs, and how they normally operate. Most image processing operations are done using a type of mathematics known as matrix transformations.

Matrix transformations work with numbers that have multiple dimensions. An example is a point in space that can be defined with three numbers – x, y and z – that are known as the point’s coordinates. The three numbers aren’t separate, all three of them are needed together to define our point. So if you did something like move the point, you will change all three of its coordinates at the same time.

In order to calculate where the point has moved to, a normal CPU would have to calculate the coordinates one at a time – first the x, then the y, and then the z. A GPU, on the other hand, can calculate all three coordinates at the same time because it has a lot of computation units that we know as processor cores.

The type of processing performed by a CPU is known as Single Instruction Single Data, or SISD, which means that it can perform one operation (such as addition) on one piece of data at a time before it goes and gets the next piece of data to work on.

By comparison, GPUs can perform the same operation on many different pieces of data simultaneously. This type of processing is known as Single Instruction Multiple Data (SIMD for short), and it is much better suited to doing the matrix transformations needed for image processing.
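
A small NumPy illustration of the difference: the Python loop below transforms one point at a time in the SISD spirit, while the single matrix multiply hands the whole batch of points to vectorised code, which is the SIMD idea. This is only an analogy running on a CPU, not actual GPU code.

```python
import numpy as np

# Moving many 3D points with one matrix. The loop works on one
# coordinate triple at a time (SISD-style), while the single matrix
# multiply processes the whole batch at once, which is the spirit of
# SIMD processing on a GPU.

# A rotation of 90 degrees about the z axis.
rotation = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])

points = np.array([[1.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0],
                   [3.0, 0.0, 1.0]])

# One point at a time (scalar-style).
moved_one_by_one = np.array([rotation @ p for p in points])

# All points at once (vectorised).
moved_all_at_once = points @ rotation.T

print(np.allclose(moved_one_by_one, moved_all_at_once))  # True
```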

That said, most CPUs in use today have more than one processor core and actually use Multiple Instruction Multiple Data (MIMD) processing because they are able to do different operations on different data at the same time.

As an example, a top-end Intel i7 Extreme processor has 10 cores, although the operating system uses those cores to do lots of very different things at once rather than all working on the same problem. That way, your computer can be reading and writing to the hard drive, sending email, recalculating your spreadsheet, downloading a file and playing Cookie Jam, among other things, all at the same time. In fact, your computer is doing a lot of things in the background without you being aware of it.

Compare this with the Nvidia Quadro M6000 GPU, which has an astonishing 3072 unified shader cores, or the AMD Radeon Pro WX 7100 with 2304 shader cores. The large number of cores allows these GPUs to do matrix calculations on hundreds or thousands of points simultaneously. They can only do this, however, because the calculations performed by the cores are almost all the same.

Even though it is, in theory, possible to run an operating system such as Linux on a GPU, it would not be very fast because the GPU just isn’t good at doing the kind of general purpose tasks that a CPU is. Similarly, trying to make a CPU do image processing would not work well because it isn’t good at doing lots of the same thing on a large number of similar objects, such as pixels.

The Google car, of course, is not so much interested in moving points around as it is in finding a green traffic light in a street scene. Matrix mathematics can be used to tell if a pixel (or a group of pixels) belongs to part of the background (a tree, for example), or something more interesting like a traffic light.
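
As a deliberately crude illustration of the idea – nothing like a production perception pipeline, and with colour thresholds invented for the example – whole-array operations can pick out pixels that look like a lit green lamp:

```python
import numpy as np

# A deliberately crude illustration of using whole-array (matrix)
# operations to pick out "green light" coloured pixels. Real perception
# pipelines are far more sophisticated; the thresholds here are invented.

def green_light_mask(image_rgb):
    """Boolean mask of pixels that look like a lit green lamp."""
    r = image_rgb[..., 0].astype(float)
    g = image_rgb[..., 1].astype(float)
    b = image_rgb[..., 2].astype(float)
    # Bright and dominated by the green channel.
    return (g > 180) & (g > 1.5 * r) & (g > 1.5 * b)

# A tiny 2x2 "image": one bright green pixel, one grey, one red, one dark.
image = np.array([[[20, 220, 30], [128, 128, 128]],
                  [[200, 40, 40], [10, 20, 10]]], dtype=np.uint8)

mask = green_light_mask(image)
print(mask)                              # only the first pixel is True
print(mask.sum(), "candidate pixel(s)")
```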

Of course, just knowing that the light is green is not enough. A human driver will be on the lookout for other vehicles that are about to encroach into their space – running a red light, for example – and take appropriate evasive action.

The Google car is no different. By the time the traffic light has turned green, the system has already spotted potential hazards such as other cars and pedestrians using its LIDAR and RADAR, and is tracking their movements. The car will only move on at a green light if the system is satisfied that there are no other cars, pedestrians or other objects moving into its path.

Autonomous GPS

How in the world do autonomous vehicles get from A to B?

At its heart, GPS relies on triangulation, or more correctly, trilateration to determine the receiver position. By knowing the distances between the user and some known reference points, the user’s position can be determined using geometry. The reference points used by GPS are of course the satellites, whose positions and orbits are known with a high degree of accuracy.

GPS satellites are grouped into 6 orbital planes with the original design calling for 4 satellites in each plane, although as of 2016 there are 32 satellites in total, which allows some redundancy and an improvement in accuracy. This means that, while four satellites are required for a GPS “fix,” at any one time at least 6 satellites will be visible from any point on the earth’s surface, and quite often more.

So, how do we measure the distance between the GPS receiver and the satellites? The technique is not unlike that used by RADAR. By measuring the time taken for radio waves to travel from the satellite to the receiver, the distance can be calculated. So, if the satellite sends a radio message to the receiver containing the time that the message was sent, the receiver can calculate the distance by subtracting the time of transmission, ToT, from the time of arrival, ToA, to get the time of flight, ToF, and multiplying that by the speed of light:

D = c × (ToA − ToT)

The messages that the satellites transmit contain information about their orbital positions, known as their ephemeris, as well as a timestamp, which is the time the transmission was sent. Once the positions of the satellites are known, and their distances from the receiver are known, the position of the receiver can be calculated.
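
The range calculation itself is a one-liner; the times below are made up simply to show the arithmetic:

```python
# The basic range calculation from the formula above: distance is the
# time of flight multiplied by the speed of light. The times here are
# made up to show the arithmetic.

SPEED_OF_LIGHT_M_S = 299_792_458

def pseudorange_m(time_of_transmission_s, time_of_arrival_s):
    time_of_flight = time_of_arrival_s - time_of_transmission_s
    return SPEED_OF_LIGHT_M_S * time_of_flight

# A signal that took about 67 milliseconds to arrive comes from a
# satellite roughly 20,000 km away.
print(pseudorange_m(0.0, 0.067))  # ~20,086 km
```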

There is a problem, however, and it’s a big one.

Much like the problem with determining longitude I talked about last time, if the local clock on the receiver is not perfectly synchronised with the clock on the satellite, the time difference will not be correct. Because we are using radio waves that travel at the speed of light – 300 million metres per second – even a small error in the local clock will result in a large error in distance. If the local clock is off by just one millionth of a second, for example, the distance calculation will be out by 300 metres.

One solution is to use an extremely accurate clock that is set to the same time as the satellites – in much the same fashion as the marine chronometers of old were set to Greenwich time. The problem with this idea is that only atomic clocks are accurate enough. Atomic clocks are very expensive and tend to be big and heavy. You won’t get a caesium clock to fit into your mobile phone, for example, and even if you could, the phone would set you back over $50,000. We need a way to get around the problem that doesn’t involve having an expensive atomic clock in every receiver.

Fortunately, we can use a bit of geometric chicanery to solve the problem.

 To begin with, we need to know the positions (x,y,z) of the satellites, as well as the time the transmissions were sent – we calculate these from the ephemeris data and the time of transmission in the satellite messages.  

Our local position can be seen as a point in 4 dimensional space – x, y, z and T – the time dimension being the (unknown) difference between the local clock and the GPS standard used by the satellites. In order to calculate our position in 4 dimensional space, we will need 4 measurements, and therefore at least 4 satellites.

Between the receiver and the satellites we have four pseudoranges, one for each satellite. These are distances that we can initially calculate from the time of arrival (according to our local clock) and the time of transmission. We know that the pseudoranges include range errors due to the time difference between the local clock and the GPS standard, among other things, which is why they are referred to as pseudo (false) ranges.

We can now get a bit mathematical. The four pseudoranges can be described using a set of four equations that relate the x, y and z positions of the four satellites, the x, y and z positions of the receiver and the range error. This set of equations can be rearranged and solved simultaneously (a mathematical technique, not “at the same time”!) to calculate the x, y, z and T co-ordinates of the receiver. There are various mathematical techniques available to solve these equations, none of which are simple, but once reformulated the calculations can be performed very quickly by a microprocessor.

Given the speed with which modern microprocessors run, you might be tempted to ask why a GPS receiver takes at least 30 seconds to obtain a “lock” on the satellites. The answer is that space communications are tricky. You may have heard that the deep space probes out near the edge of our solar system send data back to Earth at an agonisingly slow pace, with data rates measured in hundreds of bits per second compared with the gigabits per second of your high speed internet connection. The reason is that the transmitter is not very powerful and the distance the signals have to travel is very long. The signal gets lost in noise along the way, so we have to use some trickery to recover it.

The same is true of GPS satellites. The signal at the Earth’s surface from each satellite is around 0.3 femtowatts (a third of a millionth of a billionth of a watt). To recover the signal, GPS uses a similar arrangement to CDMA (Code Division Multiple Access) that uses a mathematical technique known as autocorrelation. Autocorrelation relies on sending the same signal over and over; by adding all the copies together, the signal gets bigger while the noise averages out to zero (although the actual maths is more complex than this).

When all is said and done, each satellite has to repeat itself over and over, so it takes 12 ½ minutes to send all 25 frames of its navigation message. Each frame of the navigation message is 1500 bits long and contains information about the satellite’s local time and clock offset, its ephemeris data, information about the ionosphere and the satellite’s status, as well as a portion of the satellite almanac. The almanac is a list of all the satellites and their orbital information, although it is not as accurate as the ephemeris. Each frame takes 30 seconds to transmit, which is why a GPS fix takes at least 30 seconds to acquire – it takes that long for the satellites to tell the receiver where they are. Because the almanac is out of date after a few months, a receiver that has been switched off for some time will have to perform a “cold” acquisition, which takes at least 12 ½ minutes because the receiver must download the entire almanac.
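
One common way to tackle those four equations is iterative least squares (Gauss-Newton). The sketch below uses invented satellite positions and works in kilometres to keep the numbers readable; it shows the shape of the approach rather than what any particular receiver actually does.

```python
import numpy as np

# A sketch of solving the four pseudorange equations for the receiver's
# position and clock error using Gauss-Newton iteration. The satellite
# and receiver positions are invented to keep the numbers readable; a
# real receiver works in Earth-centred coordinates and handles many
# more error sources.

satellites = np.array([
    [15_600.0,  7_540.0, 20_140.0],
    [18_760.0,  2_750.0, 18_610.0],
    [17_610.0, 14_630.0, 13_480.0],
    [19_170.0,    610.0, 18_390.0],
])  # positions in kilometres

true_receiver = np.array([-41.0, -16.0, 6_370.0])  # roughly on the Earth's surface
clock_bias_km = 100.0  # receiver clock error expressed as a distance

# Pseudoranges the receiver would measure (true range plus the clock error).
pseudoranges = np.linalg.norm(satellites - true_receiver, axis=1) + clock_bias_km

# Gauss-Newton: start from a rough guess and refine.
estimate = np.zeros(4)  # x, y, z, clock bias
for _ in range(10):
    position, bias = estimate[:3], estimate[3]
    ranges = np.linalg.norm(satellites - position, axis=1)
    residuals = ranges + bias - pseudoranges
    # Jacobian: derivative of each pseudorange w.r.t. x, y, z and the bias.
    jacobian = np.hstack([(position - satellites) / ranges[:, None],
                          np.ones((len(satellites), 1))])
    correction, *_ = np.linalg.lstsq(jacobian, -residuals, rcond=None)
    estimate = estimate + correction

print("position (km):", np.round(estimate[:3], 1))
print("clock bias (km):", round(estimate[3], 1))
```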

So, just how accurate is GPS, anyway?

GPS was designed for use by the military, and measures designed to deny the enemies of the USA access to highly accurate global positioning initially provided civilians with a location accuracy to within around 100 metres. Given that alternative navigation systems only provide accuracy to within a few kilometres, 100 metres was a significant improvement. Today, however, the system is much more accurate, with high precision aerospace GPS units able to provide an accuracy of around 2 metres 95% of the time. There are various sources of error, such as the ionosphere, that tend to degrade the performance of GPS, although its accuracy is improving all the time. In the 1970s, initial estimates indicated that the accuracy of civilian GPS would probably not be much better than about 30 metres, although developments and improvements over the past few decades have brought that figure down to around 3 metres.

As with everything, you get what you pay for and more accurate GPS units tend to be more expensive.

With an accuracy somewhere between 3 and 10 metres, GPS isn’t quite good enough to tell the Google car exactly where it is on the road, but on a good day your car GPS can be good enough to tell you what lane you are in. When combined with Google’s mapping software, however, GPS becomes an essential tool that allows the Google car to navigate to its destination.

The astute will have realized that if it takes 30 seconds to transmit a frame with the satellite’s position, the receiver will only get an update every 30 seconds. 30 seconds between updates is a long time when you are driving at freeway speeds – in 30 seconds you could be well past the point where you need to turn. So how can GPS possibly be accurate when it is moving?

GPS receivers designed to be used in vehicles also use inertial navigation to perform short term dead reckoning between GPS fixes. By monitoring the vehicle’s acceleration, it is possible to determine with a reasonable degree of accuracy how fast and in which direction the car is traveling, so dead reckoning can be used to calculate the vehicle’s position. By adding a gyroscope and compass, the receiver can make quite good estimates of its instantaneous position and speed. For example, if the car slows and turns left, the accelerometer and gyroscope will detect the change in speed and direction, and the receiver will know it has slowed down and turned left instead of traveling straight ahead at a constant speed. Combine this with the Google mapping software, and the GPS unit knows that you turned left into a side street, for example, and can update the mobile map accordingly.

Now, inertial navigation is not good over longer times – gyroscopes and accelerometers tend to drift with time and must be corrected. Even the compass in your mobile phone has to be recalibrated from time to time – but over short time periods, 30 seconds say, inertial navigation is just what the doctor ordered.

Without GPS and Google’s maps, the Google car would be able to avoid obstacles such as other cars, pedestrians, and the gutter, but would be completely unable to figure out where to go, how to get there and where to turn. With it, however, the Google car can be a better driver than most of us.
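
To make the dead reckoning step above a little more concrete, here is a toy sketch of integrating speed and yaw-rate samples between fixes. The numbers are invented, and real receivers fuse these signals far more carefully (typically with a Kalman filter).

```python
import math

# A toy dead-reckoning update: between GPS fixes, integrate speed and
# heading changes from the wheel sensors and gyroscope to keep the
# position estimate moving. The numbers are invented.

x, y = 0.0, 0.0          # last good GPS fix, in metres
heading_rad = 0.0        # travelling due "east" in this toy frame
dt = 0.1                 # seconds between sensor samples

# One second of samples: constant 15 m/s, then a gentle left turn.
samples = [(15.0, 0.0)] * 5 + [(14.0, 0.2)] * 5   # (speed m/s, yaw rate rad/s)

for speed, yaw_rate in samples:
    heading_rad += yaw_rate * dt
    x += speed * math.cos(heading_rad) * dt
    y += speed * math.sin(heading_rad) * dt

print(f"estimated position after 1 s: x={x:.2f} m, y={y:.2f} m")
```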

Autonomous Environment Mapping

In a previous post, we explained how RADAR can detect objects by bouncing radio waves off them and how, by timing the echoes and scanning the environment using directional antennas, it is able to tell where the objects are. LIDAR (Light Detection and Ranging) is the same thing, but uses light instead of radio waves. One advantage that LIDAR has over RADAR is that it will detect non-metallic objects. Anything that reflects light can be seen by LIDAR. The 3D rangefinder in the Google car is a modern form of LIDAR.

One of the problems that has to be overcome by RADARs is that the wavelength of the radio waves is not that much less than the size of the antenna. The mathematics is a bit involved, but it means that the antenna produces unwanted smaller beams to either side of the main one. These smaller beams are known as sidelobes, and can lead to bearing errors and false detections if they are not accounted for. The main beam also gets wider as the antenna gets smaller relative to the wavelength. Generally speaking, larger antennas produce smaller sidelobes and narrower beams, but of course we want smaller antennas, not larger ones!

Light Rather Than Radio

Instead of radio waves, LIDAR of course uses light – typically with a wavelength of around 1 µm. This means that the “antenna” – the mirrors and such that steer the beam – is of the order of 10,000 times larger than the wavelength of the light. The sidelobes are therefore so small and so close together that they blend in with the main beam. It also means that the LASER beam can be very narrow – around 30 centimeters wide a kilometer away.

This is good and bad: it’s good because you can use it to pinpoint the distance of a specific object (which is how it is used on the battlefield), but it’s bad because it can only look at a very small part of the environment at a time. If the beam is pointing straight ahead, for example, it won’t pick up an object that is only a few inches to the side 100 meters away.

So, how do we get around this? Well, the laser beam doesn’t HAVE to be as narrow as it can be. Using lenses, the beam can be made as wide as we like, within reason. The LIDAR used by Google in their prototypes has a beamwidth of 0.4 degrees, which is 20 times wider than a typical beamwidth for an infrared laser.

A beam that is 0.4 degrees wide is still only going to see a narrow slice of the world when it is scanned, so the beam has to be scanned vertically as well as horizontally in order to get a more complete map of the environment. There are two options here:

1. Scan in a spiral pattern, starting at the upper bound of the scan, for example, then lower the beam one step each time the scanner head rotates horizontally.

2. Scan vertically, sweeping a large number of vertical stripes as the scanner head rotates horizontally.

Both of these methods suffer the same problem: it takes time to do a complete 3D scan. If the scanner head rotates 10 times a second, say, the first method will take several seconds to do a complete 3D scan. The reaction time of the sensor means that the second approach will also take about the same amount of time, with the beam scanning rapidly in the vertical direction and taking several seconds to rotate horizontally. This type of scanning arrangement becomes a trade-off between resolution and scanning speed, with high resolution scans taking several seconds to complete.

A 3D rangefinder that takes, say, 5 seconds to do a complete scan is not going to be very useful on a car. By the time the rangefinder has spotted an object, the car could already have run into it. So there is a trick to get around this problem. The rangefinder has not one, but 64 laser/sensor pairs producing beams stacked one on top of the other (they are actually scattered slightly horizontally to make the scanning head smaller). Having a number of sensors all scanning at the same time provides the best of both worlds – good vertical and horizontal resolution, as well as the ability to perform several scans every second.

The Google car prototypes use a Velodyne HDL-64E LIDAR, which has a vertical coverage of 26.9 degrees with a resolution of 0.4 degrees. In the horizontal plane, the resolution is as high as 0.08 degrees, which means that each beam records around 4000 range points per revolution. The resolution/scanning trade-off is still there, however, which means that the highest resolution is only achievable at the lowest scanning rate (i.e. 0.08 degrees at 5 revolutions/second). If a higher scanning rate is required, the horizontal resolution drops (0.35 degrees at 20 revolutions/second, about 1000 range points per revolution per beam) because the beam is effectively smeared out in the horizontal plane.
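
Two small calculations sit behind those figures: dividing 360 degrees by the horizontal resolution gives the number of range points per beam per revolution, and each (azimuth, elevation, range) return can be converted to an x, y, z point for the map. A quick sketch (simple geometry only, so the per-revolution figures come out slightly higher than the quoted ones):

```python
import math

# Two small calculations behind the figures above: range points per
# beam per revolution at a given horizontal resolution, and converting
# a single (azimuth, elevation, range) return into an x, y, z point.

def points_per_revolution(horizontal_resolution_deg):
    return 360.0 / horizontal_resolution_deg

print(points_per_revolution(0.08))  # 4500 by simple division (quoted as ~4000)
print(points_per_revolution(0.35))  # ~1029, in line with the "about 1000" figure

def return_to_xyz(range_m, azimuth_deg, elevation_deg):
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# A return 20 m away, slightly to the left of and below the sensor.
print(return_to_xyz(20.0, azimuth_deg=10.0, elevation_deg=-5.0))
```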

LIDAR Is Effective But Pricey

To say that modern LIDARs are “just like a RADAR but using light,” then, is a little dismissive of the capability that LIDAR provides. LIDAR gives the control system in the Google car the ability to quickly build up a detailed map of its environment, even to the degree that it can tell which way a person is facing (and therefore which way they are moving) without resorting to the complex and highly computationally intensive technique of image processing a live video stream.

All of this comes at a cost, however. The price of the Velodyne HDL-64E LIDAR is around $75,000. Obviously, with the cost of a single component being more than the cost of the entire car with the processing and control system, a much cheaper solution is needed in order to bring autonomous vehicles to the mass market.

Velodyne and other companies are responding to the call. Velodyne has announced a lower spec version of the HDL-64E called the “Puck” – with 16 beams instead of 64 – at a much more moderate price of around $8,000. As with all technology, however, increasing demand will lead to mass production and economies of scale that will see the price of LIDAR units drop even further.

Autonomous Eyes – How AVs See

It is easy to fall into the misapprehension that autonomous vehicles navigate around their world in the same fashion as we do – by “seeing” the environment using stereoscopic vision to determine the relative positions of the car and the elements of its surroundings – but this is generally not the case. The task of interpreting a pair of stereoscopic images to produce a 3D map of one’s environment is an extremely complex one that requires much more computing power than can currently be squeezed into a reasonably-priced car. This type of vision-based system is on the horizon, however.

Mapping Distances and Angles

The primary sensor used by the Google car is a 3D scanning LASER rangefinder that is used to build up a map of the objects surrounding the car. This map is not like a picture in the traditional sense, but rather a list of distances and angles. The rangefinder is really a version of LIDAR (Light Detection and Ranging). LIDAR itself is a development of RADAR (Radio Detection And Ranging) that uses infrared light instead of radio waves. Before continuing with the topic of the 3D range finder then, perhaps a word on RADAR is appropriate.

It’s worth noting that many of the sensors used by vehicles such as the Google car are active, unlike human senses that are mostly passive. What do I mean by active and passive? If you took a person and put them in a dark, soundproof room and asked them to quietly walk around the walls, you would have difficulty detecting them and figuring out exactly where they were. This is because, apart from body heat, the person is not emitting any signals. Our senses are more like TV receivers – they rely on external sources to generate the signals that we can detect.

Active sensors, on the other hand, rely on a self-generated signal that is used to illuminate the environment around them. You would be able to locate a bat in the same dark, silent room because the bat uses echolocation to figure out where it is in the dark. The bat makes clicking noises with its mouth that come back as echoes that tell the bat the location of objects around it. 

Before we get on to RADAR, let’s have a quick look at a much simpler device: the electronic tape measure. The tape measure uses a similar approach to that of the bat. An ultrasonic transducer emits a short pulse of high frequency sound that travels through the air and bounces off an object such as a wall, back to the transducer which is used to receive it. Because the speed of sound through air is reasonably constant, the time taken for an echo to come back can be used to calculate the distance to the object. In the case of the tape measure, the sensor only measures distance. There is not enough information to determine direction. The reversing sensors in your car also use this type of ultrasonic distance sensor. RADAR works in a similar fashion, but with some important differences.
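
The arithmetic is straightforward: the pulse travels to the object and back, so the distance is half the round-trip time multiplied by the speed of sound. A tiny sketch (343 m/s assumes dry air at around 20 degrees C):

```python
# The arithmetic behind the electronic tape measure: the echo travels to
# the wall and back, so the distance is half the round-trip time
# multiplied by the speed of sound (343 m/s assumes dry air at ~20 C).

SPEED_OF_SOUND_M_S = 343.0

def distance_m(echo_round_trip_s):
    return SPEED_OF_SOUND_M_S * echo_round_trip_s / 2.0

# An echo that takes 23 milliseconds to return comes from a wall
# roughly 4 metres away.
print(distance_m(0.023))
```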

Determining Positions Over Long Distances

RADAR was first demonstrated in England in the years leading up to the Second World War. It worked by transmitting pulses of HF (High Frequency) radio waves broadly in the direction that incoming aircraft might be expected to be coming from (the other side of the English Channel, for instance). Some of the radio waves would reflect off the metal skin of an object such as an aircraft, back towards a receiver on the English coast. By measuring the time that the reflections took to travel from the transmitter, to the target, and back to the receiver, the distance between the transmitter, aircraft and receiver could be calculated. By using a receiver with a directional antenna, the direction of the aircraft, and therefore its position, could be determined. Since then, RADAR has been refined and miniaturized.

With a few exceptions, modern RADAR systems generally don’t work quite like the original WWII English version. Instead of separate transmitter and receiver antennas, the transmitter and receiver both use the same antenna, which is moved to scan the area of interest. A RADAR antenna has a very narrow sensitive zone, or “beam,” which means that the direction of a reflected wave can be accurately determined. The RADAR systems at airports are of this type. You could think of a rotating antenna RADAR as being a bit like a searchlight beam sweeping around a dark landscape. You can only see those things that are illuminated by the beam.

3D RADAR systems use a similar concept, but a different means of achieving it. Instead of physically moving the antenna, they use an electronically scanned (phased array) antenna that, with some sophisticated signal processing, can move the beam around without moving the antenna itself. The beam is then scanned using software. These RADAR systems can determine the direction in both bearing (the left/right plane) and elevation (altitude) as well as distance.

Another different type of RADAR uses the Doppler effect. When an object is moving towards or away from a transmitter, the reflected wave returns to the receiver shifted slightly in frequency. An object coming towards the transceiver will give an echo that is slightly higher in frequency and an object that is moving away will give an echo that is slightly lower in frequency. The frequency difference, or shift, is directly related to the speed with which the object is moving relative to the transceiver.
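
For a RADAR the wave travels out and back, so the shift works out to roughly 2 × v × f0 / c, which can be inverted to recover the target’s speed. A small sketch, assuming a 24 GHz carrier, which is a typical automotive RADAR band rather than a figure from the Google car:

```python
# The Doppler relationship described above for a RADAR, where the wave
# travels out and back: the frequency shift is roughly 2 * v * f0 / c,
# so the target's relative speed can be recovered from the measured
# shift. The 24 GHz carrier is an assumed, typical automotive figure.

SPEED_OF_LIGHT_M_S = 299_792_458
CARRIER_HZ = 24e9  # assumed 24 GHz automotive radar

def relative_speed_m_s(doppler_shift_hz):
    return doppler_shift_hz * SPEED_OF_LIGHT_M_S / (2 * CARRIER_HZ)

# A 3.2 kHz shift corresponds to a closing speed of about 20 m/s (72 km/h).
print(relative_speed_m_s(3200.0))
```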

The radar guns used by police to measure speed – the ones we all know and love – use Doppler RADAR. A highly sophisticated 3D search RADAR, such as those used by the military, combines all of these techniques and more with some very sophisticated processing to give a system that can tell the size, distance, speed and direction of travel of aircraft and other vehicles in its vicinity.

Lots of RADARS in Autonomous Vehicles

RADAR sensors are good for detecting objects that are close and moving quickly (such as curbs, other cars and pedestrians), although the type in the Google car is not that good at measuring direction. The Google car has 4 bumper mounted RADAR sensors that are more like the electronic tape measure I mentioned earlier. This type of RADAR is also used to help aircraft pilots avoid collisions with terrain. Used in this way it is called a RADAR altimeter, and it works by bouncing radio waves off the ground to figure out how far away the ground is from the aircraft. If the aircraft gets too low, a terrain warning is sounded in the cockpit.

In a similar way, the Google car is able to tell if it is getting too close to the cars in front of it, and is able to detect objects such as pedestrians that appear in its path. By using Doppler, the RADAR sensors are able to tell if the car in front has changed its speed — by braking sharply, for example.

Google used a modified Toyota Prius as the basis for their Google car prototypes. If you look at the pictures of the early prototype Google cars, you can see two of the RADAR units just in front of the left and right mirrors. They aren’t very big, maybe 7–10 cm on a side. The reason that modern RADARs are so small is that they use very high frequency radio waves in the microwave spectrum. It might be a bit of a concern for some that there will be cars driving around radiating microwaves all over the place, but there is no need to be concerned. The power levels are very low – about the same as your home WiFi router or your mobile phone.

Autonomous Braking

As we move about in the world, we take for granted the tools we use to determine where in the world we are. Our five primary senses allow us to experience the world – although there are at least another 4 categories of non-traditional senses, but more on that in a minute.

By being able to see and hear (and to a lesser extent, feel) elements of our environment, we are able to navigate around in it without bumping into dangerous things, falling off precipitous heights or getting ourselves into myriad other forms of harmful situations.

So our primary senses for navigation are sight and sound, although visually impaired people are able to turn their other senses to the purpose of navigation. It would be a dull life indeed, however, if moving around were the only purpose of our senses, and touch, taste and smell work to make our experience of the world the wonderfully immersive experience that it is.

The five primary senses are not the whole story, however. There are other senses such as vestibular senses (balance and acceleration), thermoception (heat/cold), proprioception (kinetic/movement) and nociception (pain).

The inner ear provides us with vestibular senses that allow us to perceive acceleration in the three axes as well as the angle our heads are at relative to the horizontal. Of course we are all familiar with the nausea that comes from confusion between the vestibular senses and sight – an experience we get when on a merry-go-round for example.  Thermosensors in our skin and mouth perceive heat and cold which allows us to avoid things that will burn us or freeze us, and proprioceptors allow us to just KNOW where our hands and feet are without having to see them. Without proprioception, we wouldn’t be able to walk, let alone touch type!

Nociception typically works as a warning system causing pain when things are not as they should be. Nociceptors are able to report injuries due to temperature, pressure and chemicals, and are found in the gut, the skin, the mucosa and corneas. There are no nociceptors in the brain which is why the brain doesn’t feel pain.

Sight, Sound, and Magic?

In a similar way, machines rely on sensors to perform the functions we ask of them. A simple machine such as an oven, for example, has a temperature sensor in it that allows its temperature control system to keep the air temperature inside it reasonably constant. Anyone who has tried to cook in an oven with a broken thermostat will know how handy a temperature sensor is!

The development of the autonomous car by Google seems at first glance almost to be a feat of magic. Journalists are awed by the experience of taking their hands off the steering wheel and allowing the car’s automation to take over. Autonomous cars are proving themselves to be far safer than their human-piloted counterparts – as a recent YouTube video demonstrated, showing just how good the predictive software in the Tesla Autopilot is by braking to avoid a collision before it had even happened.

So how do the Google car and its autonomous driving ilk perform these feats of magic? The “magic” is a combination of clever software and some well-placed sensors.

The sensors in the Google car share a lot of similarities with human senses, although there are also many differences. Over the next few weeks, we’ll be posting more about the sensor technology that is used to perform the magic that is the autonomous vehicle.