Guiyang Automated Vehicle Project

The City of Guiyang is planning to become the “Big Data Valley” of China, and the timing is ideal for the Guiyang Automated Vehicle Project. Guiyang has created an express bus system, a high-speed rail (“gaotie”) bullet train terminal, and a dedicated bus-only lane, the BRT (Bus Rapid Transit). The Number Two Ring Road circles the city and allows buses, traveling at relatively slow speed, to complete the loop in about an hour. The ring road has no stoplights and relatively few traffic slowdowns.

See: http://www.tourguizhou.com/brt-bus-rapid-transit-guiyang/ and http://www.tourguizhou.com/big-data-automated-vehicles-guiyang/ .

The USAInfo, LLC company is a one-man operation that has published this website and is now attempting to “jump start” automated vehicle development in Guiyang, giving focus to the Guiyang “Big Data” and BRT efforts. The one-man format is agile and allows rapid dissemination of information based on collaborators’ input. All input is considered “public” for purposes of promoting the Guiyang Project. USAInfo, LLC makes no warranties and accepts no responsibility for how the user makes use of this information.

A focus on automated vehicles in the new “Big Data Valley of China” can attract both financial and human capital to the city. This will allow Guiyang to effectively “leapfrog” the competition. Information presented here will be amended as more contributions occur. The presentation is drawn from sources deemed reliable and is provided without attribution. It is up to the user to confirm any information that he or she may deem useful.

There is already a system design in place, created by the local authorities to maintain an efficient and safe transport interface between the BRT and the gaotie terminal (Beizhan). While the authorities may not have been thinking about vehicle automation at the time of the design, the safety and convenience built into their design make an AV system very feasible.

The AV technology will control cars, buses, and trucks in the future, but the timing is the issue. It will require government cooperation and initiatives to make it happen. The role of private industry in this essentially “public” transportation can only be determined by the government, and that decision process is beyond the scope of the USAInfo, LLC initiative. We know that AV will happen in Guiyang because technology and industry giants will not be deterred. The government in Guiyang (and China) has been proactive in many transportation issues. The question is about implementation of this technology and to what degree government involvement will be proactive or reactive.

Technology:

What Technology does AV development utilize?

For the time being, let us refrain from calling these “driverless” cars. None of the prototypes you see today (except Google’s) is actually driverless. Let’s use the term “autonomous vehicles (AV)” or “automated vehicles”.

An autonomous vehicle relies primarily on three functional blocks (a minimal code sketch follows the list):

  1. PERCEPTION (Sensors)
  2. DECISION MAKING (Algorithms & Processing)
  3. MANIPULATION (Actuators)
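
As a concrete illustration of how these three blocks fit together, here is a minimal Python sketch. It is not drawn from any production system; the function names, the 3-second headway rule, and the sensor field names are assumptions made for illustration only.

```python
# Minimal sketch of the three functional blocks: perception (sensors),
# decision making (algorithms), manipulation (actuators). Illustrative only.
from dataclasses import dataclass


@dataclass
class Observation:
    obstacle_distance_m: float   # produced by the perception block
    ego_speed_mps: float


def perceive(raw_sensor_frame: dict) -> Observation:
    """PERCEPTION: turn raw sensor readings into a structured observation."""
    return Observation(
        obstacle_distance_m=raw_sensor_frame["lidar_min_range_m"],
        ego_speed_mps=raw_sensor_frame["wheel_speed_mps"],
    )


def decide(obs: Observation) -> float:
    """DECISION MAKING: choose a target speed from the observation."""
    # Slow down when anything is closer than a 3-second headway (illustrative rule).
    safe_gap_m = 3.0 * obs.ego_speed_mps
    return obs.ego_speed_mps * 0.8 if obs.obstacle_distance_m < safe_gap_m else obs.ego_speed_mps


def actuate(target_speed_mps: float, current_speed_mps: float) -> str:
    """MANIPULATION: translate the decision into a throttle/brake command."""
    return "throttle" if target_speed_mps > current_speed_mps else "brake"


frame = {"lidar_min_range_m": 18.0, "wheel_speed_mps": 12.0}
obs = perceive(frame)
print(actuate(decide(obs), obs.ego_speed_mps))   # -> "brake"
```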

What Sensors are used in AV?

We are dealing here with the first of these, i.e., the perception systems. Perception systems, or AD (automated driving) sensors, are primarily categorized into two forms:

  1. Proprioceptive sensors, responsible for sensing the vehicle’s own state (wheel encoders, inertial measurement unit, etc.)
  2. Exteroceptive sensors, responsible for sensing the ambient surroundings (cameras, LiDAR, radar, ultrasonic, etc.)
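
As a small illustration of this split, the sketch below tags a few sensor types with the class they belong to. The sensor names and the mapping are assumptions made purely for illustration.

```python
# Illustrative (assumed) categorization of AV sensors into the two classes named above.
from enum import Enum


class SensorClass(Enum):
    PROPRIOCEPTIVE = "senses the vehicle's own state"
    EXTEROCEPTIVE = "senses the surrounding environment"


SENSOR_CLASSES = {
    "wheel_encoder": SensorClass.PROPRIOCEPTIVE,
    "imu": SensorClass.PROPRIOCEPTIVE,
    "camera": SensorClass.EXTEROCEPTIVE,
    "lidar": SensorClass.EXTEROCEPTIVE,
    "radar": SensorClass.EXTEROCEPTIVE,
    "ultrasonic": SensorClass.EXTEROCEPTIVE,
}

print(SENSOR_CLASSES["lidar"].value)  # -> "senses the surrounding environment"
```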

The exteroceptive sensors are of particular importance to any AD application, since they must deal with the external environment. Some examples follow:

Vision-based (cameras): Camera-based systems are either mono-vision (one source of vision) or stereo-vision (a set of multiple, normally two, mono-vision cameras, much like human eyesight). Depending on the need, they may be mounted on the front grille, side mirrors, rear door, rear windshield, and so on. They closely monitor nearby vehicles, lane markings, speed signs, high beams, etc., and warn the driver when the car is in danger of an imminent collision with a pedestrian or an approaching vehicle. The most advanced camera systems do not only detect obstacles but also identify them and predict their immediate trajectories using advanced algorithms.

RADAR: Both short-range and long-range automotive-grade radars are used (mostly in the 24 GHz and 77 GHz bands) for AD applications. Short-range radars, as the name indicates, sense the environment in the vicinity of the car (~30 m), especially at low speeds, whereas long-range radars cover relatively long distances (~200 m), usually at high speeds. Generally, the radar sensor acquires information about nearby objects, such as distance, size, and velocity (if the object is moving), and warns the driver if an imminent collision is detected. Should the driver fail to intervene within the stipulated time (post-warning), the radar’s input may even engage advanced steering and braking controls to prevent the crash. The high precision and weather-agnostic capability of radar make it a permanent fixture in any autonomous vehicle prototype, regardless of ambient conditions. Going forward, with the introduction of ultra-wideband radar technology (high frequency, ~100 GHz), radars will provide more accurate information and become smaller, cheaper, and more reliable.
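
To make the range and velocity measurements concrete, here is a back-of-the-envelope Python sketch of the two quantities a radar reports. The 77 GHz carrier and the example numbers are assumptions for illustration, not a description of any particular radar unit.

```python
# Range from round-trip time and relative speed from Doppler shift (illustrative).
C = 3.0e8  # speed of light, m/s


def radar_range_m(round_trip_time_s: float) -> float:
    """Range = c * t / 2, since the pulse travels to the target and back."""
    return C * round_trip_time_s / 2.0


def radar_relative_speed_mps(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative speed from the Doppler shift of an assumed 77 GHz carrier."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)


print(radar_range_m(2.0e-6))                 # ~300 m for a 2-microsecond round trip
print(radar_relative_speed_mps(10_000.0))    # ~19.5 m/s closing speed
```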

LiDAR: In layman’s terms, LiDARs are “light-based radars” that send out invisible laser pulses and measure their return time to create a 3D profile around the car. Unlike cameras and radars, LiDARs do not technically detect nearby objects; rather, they “profile” them by illuminating the objects and analyzing the path of the reflected light. This, repeated over a million times per second, yields a high-resolution image. Since a LiDAR sensor uses emitted light, its operation is not impaired by the intensity of ambient light: it performs the same at night or in daytime, under clouds or sun, in shadow or sunlight. The result is greater accuracy of perception and high resilience to interference. Currently, LiDAR sensors come at a very high price; LiDAR alone makes the cost of the sensor suite that goes into a vehicle exorbitant. For example, the Google driverless car features a high-quality Velodyne LiDAR costing $75,000. In the future, as solid-state technology arrives, the cost will come down drastically, making LiDARs indispensable for any AV.
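
The following sketch shows, under simplified assumptions, how a single LiDAR return (round-trip time plus beam angles) becomes one 3D point in the profile described above; the geometry and example values are illustrative only.

```python
# One LiDAR return -> one 3D point in the sensor frame (illustrative geometry).
import math

C = 3.0e8  # speed of light, m/s


def lidar_point(round_trip_time_s: float, azimuth_rad: float, elevation_rad: float):
    """Return an (x, y, z) point for one laser return."""
    r = C * round_trip_time_s / 2.0
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return x, y, z


# One return: 0.4-microsecond round trip (~60 m), 30 degrees to the left, level beam.
print(lidar_point(4.0e-7, math.radians(30), 0.0))
```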

Ultrasonic: Ultrasonic sensors use the same “time of flight” principle as radar, except that they use high-frequency sound waves instead of microwaves. Ultrasonic emissions are effectively sound waves with frequencies above what the human ear can hear, suitable for short- to medium-range applications at low speed. Using echo times from sound waves that bounce off nearby objects, the sensors can determine how far the vehicle is from an object and alert the driver as the vehicle gets closer. Automakers already use these sensors, albeit only for short-range applications. For example, Tesla’s Model S sedan is equipped with 12 long-range ultrasonic sensors that provide 360-degree coverage to augment the forward-facing radar and enable its Autopilot system.
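
Here is a minimal sketch of the ultrasonic time-of-flight calculation; the 343 m/s speed of sound and the 0.5 m warning threshold are assumed values chosen for illustration.

```python
# Distance from echo time for an ultrasonic parking sensor (illustrative).
SPEED_OF_SOUND_MPS = 343.0


def ultrasonic_distance_m(echo_time_s: float) -> float:
    """Distance = speed_of_sound * echo_time / 2 (out and back)."""
    return SPEED_OF_SOUND_MPS * echo_time_s / 2.0


def parking_warning(echo_time_s: float, threshold_m: float = 0.5) -> str:
    d = ultrasonic_distance_m(echo_time_s)
    return f"{d:.2f} m - STOP" if d < threshold_m else f"{d:.2f} m - clear"


print(parking_warning(0.002))   # ~0.34 m -> "0.34 m - STOP"
print(parking_warning(0.02))    # ~3.43 m -> "3.43 m - clear"
```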

Wheel speed sensors: Wheel speed sensors record the speed of the wheels and communicate this information to the driving safety systems. Combined with accelerations measured along the vehicle’s longitudinal and vertical axes, this supports rollover sensing: a rollover event can be detected using the angular-rate signal and a rollover-sensing algorithm. Wheel speed sensors can be either passive (requiring no additional power source) or active (requiring a power supply). Early anti-lock braking systems (ABS) relied on passive sensors, whose signal could only be evaluated at speeds above roughly 5–10 mph. Nowadays active sensors are becoming more popular because their “digital” output can be used by the control unit directly, without conversion. Active wheel sensors also supply more precise speed information, which can be used by other vehicle electrical systems, such as navigation devices.

AVs are not merely a large constellation of sensors working together. There is a common robotics adage: “sensing is easy, perception is difficult.” To enable AD functions, the vehicle must perceive the environment with very high precision and reliability. Sensors must learn from their environment gradually and be intelligent enough to avoid both ‘false positives’ (ghost objects) and ‘false negatives’ (blindness). Modern vehicles rely on a combination of different sensors and, most importantly, on the ability to “fuse” the data (sometimes at around 1 Gbps) emanating from the various sources. This is commonly known as sensor fusion.

Each sensor technology has its own shortcomings, which makes it difficult for any of them to be used as a stand-alone system. Vision-based systems may be impaired in bad weather. Furthermore, what happens if an important sensor fails or produces erroneous results because of a malfunction?

Notwithstanding the capabilities of each sensor-based system, the risk posed by the failure of any one sensor (or all of them) is unacceptably high. One way to minimize this risk is to fuse the strengths of the vehicle’s legacy and advanced sensor units, creating multiple overlapping data patterns so that the quality of the processed sensory input is as accurate as possible. A fused sensor system combines the benefits of multiple sensors, i.e., radars, LiDARs, GPS, and cameras, to construct a hypothesis about the state of the environment the vehicle is in.
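
One simple way to picture such fusion, purely as an illustration and not any vendor’s actual method, is an inverse-variance weighted average of the range estimates from several sensors, so that no single noisy or faulty sensor dominates the hypothesis:

```python
# Toy sensor-fusion sketch: combine distance estimates, weighting by assumed reliability.
def fuse_estimates(estimates):
    """estimates: list of (value, variance) pairs from different sensors."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var


# Camera, radar, and LiDAR each report the distance (m) to the same object.
readings = [(31.0, 4.0), (30.2, 1.0), (30.5, 0.25)]
distance, variance = fuse_estimates(readings)
print(f"fused distance: {distance:.2f} m (variance {variance:.2f})")
```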

The Google Sensors

  1. Laser Range Finder

The heart of Google’s self-driving car is the rotating rooftop LIDAR, a laser range finder. With its array of 64 laser beams, this sensor creates 3D images of objects, helping the car see hazards along the way. The device calculates how far an object is from the moving vehicle based on the time it takes the laser beams to hit the object and return. These high-intensity lasers can calculate distance and create images of objects within an impressive 200 m range.

  2. Front Camera for Near Vision

A camera mounted on the windshield takes care of helping the car ‘see’ objects right in front of it. These include the usual suspects: pedestrians and other motorists. This camera also detects and records information about road signs and traffic lights, which is intelligently interpreted by the car’s built-in software.

  3. Bumper-Mounted Radar


Four radars mounted on the car’s front and rear bumpers enable the car to be aware of vehicles in front of and behind it. Most of us are familiar with this technology, as it is the same one that adaptive cruise control systems in today’s cars are based on. The radar sensors on the car’s bumpers keep a ‘digital eye’ on the car ahead. The software is programmed to maintain, at all times, a following distance of 2–4 seconds (it could be even higher) behind the car ahead. With this technology the car will automatically speed up or slow down depending on the behavior of the car or driver ahead. Google’s self-driving cars use this technology to keep passengers and other motorists safe by avoiding bumps and crashes.
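
A minimal sketch of that time-gap rule might look like the following; the 3-second default gap reflects the 2–4 second figure in the text, while the proportional gain is an assumption for illustration.

```python
# Toy adaptive-cruise-control rule: keep the gap to the lead car near a desired time gap.
def acc_speed_command(ego_speed_mps: float, gap_m: float,
                      desired_gap_s: float = 3.0, gain: float = 0.2) -> float:
    """Nudge speed up or down so the gap stays near the desired time gap."""
    desired_gap_m = desired_gap_s * ego_speed_mps
    error_m = gap_m - desired_gap_m          # positive: too far back, speed up a little
    return max(0.0, ego_speed_mps + gain * error_m)


print(acc_speed_command(ego_speed_mps=20.0, gap_m=40.0))   # gap too small -> slow to 16.0 m/s
print(acc_speed_command(ego_speed_mps=20.0, gap_m=80.0))   # gap generous -> speed up to 24.0 m/s
```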

  4. Aerial for Precise Geo-Location

An aerial on the rear of the car receives information about the car’s precise location from GPS satellites. The car’s GPS inertial navigation unit works with the sensors to help the car localize itself. But GPS estimates may be off by several meters due to signal disturbances and other atmospheric interference. To minimize this uncertainty, the GPS data is compared with sensor and map data previously collected from the same location. As the vehicle moves, its internal map is updated with the new positional information provided by the sensors.
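
A very rough way to picture this correction step, and emphatically not Google’s actual localizer, is a weighted blend of the raw GPS fix with the position inferred from matching live sensor data against the prior map:

```python
# Toy localization correction: trust the map-matched estimate more than raw GPS.
def blend_position(gps_xy, map_match_xy, gps_weight=0.2):
    """Weighted average of the two position estimates (weights are assumptions)."""
    gx, gy = gps_xy
    mx, my = map_match_xy
    w = gps_weight
    return (w * gx + (1 - w) * mx, w * gy + (1 - w) * my)


gps_fix = (105.3, 212.8)        # metres in a local frame, off by a few metres
map_match = (103.9, 210.1)      # position inferred from LiDAR-vs-map alignment
print(blend_position(gps_fix, map_match))   # -> (104.18, 210.64)
```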

  5. Ultrasonic Sensors on Rear Wheels

An ultrasonic sensor on one of the rear wheels helps keep track of the car’s movements and alerts the car to obstacles behind it. These ultrasonic sensors are already in action in some of today’s technologically advanced cars. Cars that offer automatic ‘Reverse Park Assist’ technology utilise such sensors to help navigate the car into tight reverse parking spots. Typically, these sensors are activated when the car is in reverse gear.

  6. Devices within the Car

Inside the car are altimeters, gyroscopes, and tachometers whose measurements, taken together, help determine the car’s position very precisely. This provides highly accurate data for the car to operate safely.

  7. Synergistic Combining of Sensors

All the data gathered by these sensors is collated and interpreted together by the car’s CPU and built-in software to create a safe driving experience.

  8. Programmed to Interpret Common Road Signs

The software has been programmed to correctly interpret common road behavior and motorist signs. For example, if a cyclist gestures that he intends to make a maneuver, the driverless car interprets it correctly and slows down to allow the cyclist to turn. Predetermined shape and motion descriptors are programmed into the system to help the car make intelligent decisions. For instance, if the car detects a two-wheeled object and determines the speed of the object as 10 mph rather than 50 mph, it instantly interprets the vehicle as a bicycle rather than a motorbike and behaves accordingly. Several such programs fed into the car’s central processing unit work simultaneously, helping the car make safe and intelligent decisions on busy roads.
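
The shape-and-motion rule in the bicycle example might be sketched as follows; the 25 mph threshold and the behavior strings are assumptions used only to mirror the 10 mph versus 50 mph example above.

```python
# Toy "shape plus motion descriptor" rule for a detected two-wheeled object.
def classify_two_wheeler(speed_mph: float) -> str:
    """Given an object already detected as two-wheeled, use speed to refine the label."""
    return "bicycle" if speed_mph < 25.0 else "motorbike"


def behavior_for(label: str) -> str:
    # Allow more clearance and anticipate hand signals for bicycles (illustrative policy).
    return "slow down, yield to signalled turns" if label == "bicycle" else "maintain normal gap"


label = classify_two_wheeler(10.0)
print(label, "->", behavior_for(label))   # bicycle -> slow down, yield to signalled turns
```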

  9. Mapping in Advance

At the moment, before a self-driving car is tested, a regular car is driven along the route to map the route and its road conditions, including poles, road markers, road signs and more. This map is fed into the car’s software, helping the car identify what is a regular part of the road. As the car moves, its Velodyne laser range finder kicks in (see point 1) and generates a detailed 3D map of the environment at that moment. The car compares this map with the pre-existing map to figure out the non-standard aspects of the road, correctly identifying them as pedestrians and/or other motorists, and avoids them.
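
The map-comparison idea can be pictured as a simple difference between the prior map and the live scan: anything occupied now but absent from the survey is treated as a dynamic obstacle. The grid cells and labels below are assumptions for illustration only.

```python
# Toy map differencing: live occupancy not explained by the prior map = dynamic obstacle.
prior_map = {(2, 3): "pole", (5, 1): "road_sign"}   # static features surveyed earlier
live_scan = {(2, 3), (5, 1), (4, 4)}                # grid cells occupied right now

dynamic_obstacles = [cell for cell in live_scan if cell not in prior_map]
print(dynamic_obstacles)   # -> [(4, 4)]  not in the prior map, so treat it as something to avoid
```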

  10. Programming Real-Life Behavior

Google engineers have programmed some real-life behavior into these cars. While the vehicle does slow down to allow other motorists to go ahead, especially at four-way intersections, the car has also been programmed to advance if it detects that the other vehicle is not moving.

Though Google’s self-driving car is not here yet, all this technology sure does make it exciting. And perhaps we are closer to driving one than we let ourselves believe.

Tesla is making progress, as reported below:

The following information about Tesla’s efforts doesn’t clearly distinguish between “Sensors”, “Decision Making”, and “Manipulation”. A simplification of this technology process might be like what we have all experienced with elevators. In the early days of elevators there was a human operator. You told him where you wanted to go; he decided which floor to take you to (Decision Making), visually (Sensor) aligned the elevator with the respective floor levels, and applied power (Manipulation) to raise and lower the elevator. There was a lot of sensor activity, decision making, and manipulation of doors and lift motors to get you where you wanted to go. We have since fully automated the process, and the elevator operator occupation is a distant memory. Perhaps the first “autopilot” in car automation was cruise (speed) control: you turn it on and the car maintains that speed until you (the operator) touch the brake or accelerator, at which time you resume “manual” control. As you can see, a wide range of products can be incorporated into AV test cars without raising concerns among permitting authorities. Your headlights turning on at dusk is another example. If you think about it, we already have a lot of automation. Keep this in mind as you read about the Tesla efforts.
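
To make the elevator analogy concrete, here is the sense-decide-actuate loop written as a short Python sketch; the floor sensor and motor command names are assumptions, not a real controller.

```python
# The elevator analogy as a sense -> decide -> actuate loop (illustrative only).
def sense(current_floor: int) -> int:
    return current_floor                      # SENSOR: read the floor-alignment sensor


def decide(current_floor: int, requested_floor: int) -> str:
    if requested_floor > current_floor:       # DECISION MAKING: which way to move, or stop
        return "up"
    if requested_floor < current_floor:
        return "down"
    return "stop"


def actuate(command: str) -> None:
    print(f"motor: {command}")                # MANIPULATION: drive the lift motor


floor, target = 3, 7
while (cmd := decide(sense(floor), target)) != "stop":
    actuate(cmd)
    floor += 1 if cmd == "up" else -1
actuate("stop")
```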

Tesla Autopilot Systems
All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.

Advanced Sensor Coverage
Eight surround cameras provide 360 degrees of visibility around the car at up to 250 meters of range. Twelve updated ultrasonic sensors complement this vision, allowing for detection of both hard and soft objects at nearly twice the distance of the prior system. A forward-facing radar with enhanced processing provides additional data about the world on a redundant wavelength that is able to see through heavy rain, fog, dust and even the car ahead.

Processing Power Increased 40x.
To make sense of all of this data, a new onboard computer with over 40 times the computing power of the previous generation runs the new Tesla-developed neural net for vision, sonar and radar processing software. Together, this system provides a view of the world that a driver alone cannot access, seeing in every direction simultaneously, and on wavelengths that go far beyond the human senses.

Enhanced Autopilot
Enhanced Autopilot adds these new capabilities to the Tesla Autopilot driving experience. Your Tesla will match speed to traffic conditions, keep within a lane, automatically change lanes without requiring driver input, transition from one freeway to another, exit the freeway when your destination is near, self-park when near a parking spot and be summoned to and from your garage.

Tesla’s Enhanced Autopilot software has begun rolling out and features will continue to be introduced as validation is completed, subject to regulatory approval [emphasis added].

On-ramp to Off-ramp
Once on the freeway, your Tesla will determine which lane you need to be in and when. In addition to ensuring you reach your intended exit, Autopilot will watch for opportunities to move to a faster lane when you’re caught behind slower traffic. When you reach your exit, your Tesla will depart the freeway, slow down and transition control back to you.

Smart Summon
With Smart Summon, your car will navigate more complex environments and parking spaces, maneuvering around objects as necessary to come find you.

Full Self-Driving Capability
Build upon Enhanced Autopilot and order Full Self-Driving Capability on your Tesla. This doubles the number of active cameras from four to eight, enabling full self-driving in almost all circumstances, at what we believe will be a probability of safety at least twice as good as the average human driver. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat. For Superchargers that have automatic charge connection enabled, you will not even need to plug in your vehicle.

All you will need to do is get in and tell your car where to go. If you don’t say anything, the car will look at your calendar and take you there as the assumed destination or just home if nothing is on the calendar. Your Tesla will figure out the optimal route, navigate urban streets (even without lane markings), manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed. When you arrive at your destination, simply step out at the entrance and your car will enter park seek mode, automatically search for a spot and park itself. A tap on your phone summons it back to you.
Please note that Self-Driving functionality is dependent upon extensive software validation and regulatory approval, which may vary widely by jurisdiction. It is not possible to know exactly when each element of the functionality described above will be available, as this is highly dependent on local regulatory approval. Please note also that using a self-driving Tesla for car sharing and ride hailing for friends and family is fine, but doing so for revenue purposes will only be permissible on the Tesla Network, details of which will be released next year.


IN THE REALM OF DECISION MAKING AND MANIPULATION
The foundations of AI all have application here and are fundamental to future AV activities, as are the foundations of robotics and motion control.

R&D on autonomous vehicle systems is underpinned by the following fundamentals and knowledge bases:

(1) Identification and estimation:

Adaptive, predictive and robust control

Intelligent control

Intelligent signal processing

Natural language processing

Data mining and Big data processing

Semantics and ontologies

Artificial intelligence

Deep learning

Image processing

Pattern recognition

Forecasting

Robotics: motion, manipulation and control

Autonomous systems and control

Computational intelligence

Artificial neural networks

Fuzzy systems

Genetic algorithms

Neuro-fuzzy systems

Artificial immune systems

Swarm intelligence

Machine vision

Sensors, perceptions, and sensor fusions

Smart materials and smart structures

Biomedical engineering

Biomechanics and biodynamics

Human-machine interface and interaction


(2) Artificial Intelligence:

Brain models, Brain mapping, Cognitive science; Natural language processing; Fuzzy logic and soft computing; Software tools for AI; Expert systems; Decision support systems; Automated problem solving; Knowledge discovery; Knowledge representation; Knowledge acquisition; Knowledge-intensive problem solving techniques; Knowledge networks and management; Intelligent information systems; Intelligent data mining and farming; Intelligent web-based business; Intelligent agents; Intelligent networks; Intelligent databases; Intelligent user interface; AI and evolutionary algorithms; Intelligent tutoring systems; Reasoning strategies; Distributed AI algorithms and techniques; Distributed AI systems and architectures; Neural networks and applications; Heuristic searching methods; Languages and programming techniques for AI; Constraint-based reasoning and constraint programming; Intelligent information fusion; Learning and adaptive sensor fusion; Search and meta-heuristics; Multi-sensor data fusion using neural and fuzzy techniques; Integration of AI with other technologies; Evaluation of AI tools; Social intelligence (markets and computational societies); Social impact of AI; Emerging technologies; and Applications (including: computer vision, signal processing, military, surveillance, robotics, medicine, pattern recognition, face recognition, finger print recognition, finance and marketing, stock market, education, emerging applications, …).

(3) Machine Learning:

Statistical learning theory; Unsupervised and Supervised Learning; Multivariate analysis; Hierarchical learning models; Relational learning models; Bayesian methods; Meta learning; Stochastic optimization; Simulated annealing; Heuristic optimization techniques; Neural networks; Reinforcement learning; Multi-criteria reinforcement learning; General Learning models; Genetic algorithms; Multiple hypothesis testing; Decision making; Markov chain Monte Carlo (MCMC) methods; Non-parametric methods; Graphical models; Gaussian graphical models; Bayesian networks; Particle filter; Cross-Entropy method; Ant colony optimization; Time series prediction; Fuzzy logic and learning; Inductive learning and applications; Grammatical inference; Graph kernel and graph distance methods; Graph-based semi-supervised learning; Graph clustering; Graph learning based on graph transformations; Graph learning based on graph grammars; Graph learning based on graph matching; Information-theoretical approaches to graphs; Motif search; Network inference; Aspects of knowledge structures; Computational Intelligence; Knowledge acquisition and discovery techniques; Induction of document grammars; General Structure- based approaches in information retrieval, web authoring, information extraction, and web content mining; Latent semantic analysis; Aspects of natural language processing; Intelligent linguistic; Aspects of text technology; Biostatistics; High-throughput data analysis; Computational Neuroscience; and Computational Statistics.