Road to self-driving development



This content originally appeared on DEV Community and was authored by Tracy Gilmore

There have been years of investigation into automated systems for everything from aircraft, weapon systems and unmanned drones to automobiles. Despite the millions, if not billions, of dollars invested, and the claims made by some electric vehicle manufacturers, the day of the fully automated (aka self-driving) car is still a long way off.

I would argue that handing over responsibility for developing the systems on which our society (our civilisation) is so reliant is an exceptionally dangerous strategy, far more dangerous than streets full of driverless vehicles. Yet, if some big businesses are to be believed, we seem determined to hand over this vital responsibility.

Call them Software Developers or Software Engineers; it makes little difference if the AI hype is to be believed and all our jobs are at risk in the next 2-5 years. I see some dangerous parallels between the rush to self-driving cars and the use of AI in the Software Development Life-Cycle.

Six Levels of Automation

Several studies into automation have defined a six-level scale that I think might be applicable to the software industry’s adoption of AI and the risk it might present.

Synopsys, The 6 Levels of Vehicle Autonomy

In the context of software development, the six levels can be aligned as follows:

Level 0: The developer codes everything using a plain text editor rather than an Integrated Development Environment (IDE).
Level 1: The IDE provides the developer with features such as auto-completion and syntax highlighting.
Level 2: The IDE integrates features such as linting and continuous test execution.

Up to this level the developer is completely in control and decides how to use the information presented by the assistive features. They remain responsible for determining what course of action to take.

Level 3: Current AI tools can analyse the source code and suggest improvements, which the developer can choose to accept or reject.
Level 4: When prompted by the developer, AI tools can go beyond the current code base and draw on information from external resources, both within the system domain (e.g. via MCP) and outside it (training material), to generate new code.

The next level is, in my uninformed opinion, the scary one, the one that potentially puts our society at risk. Cutting expensive human resources is seen as an essential strategy in many businesses but, in this context, are the jobs of software developers the only cost?

Level 5: Development of the system is controlled entirely by the AI tools, without any developer intervention. At this point we have no idea what the system is doing and cannot be absolutely sure AI has our best interests in “mind”, or whatever the AI equivalent of a mind might be.
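The six levels above can be sketched in code. This is purely an illustrative model, with level names of my own invention; the article does not define any such API:

```python
from enum import IntEnum


class AutomationLevel(IntEnum):
    """Illustrative mapping of the six vehicle-autonomy levels onto
    software development. Names are hypothetical, not from the article."""
    MANUAL_EDITING = 0      # plain text editor, no IDE
    EDITOR_ASSISTANCE = 1   # auto-completion, syntax highlighting
    INTEGRATED_TOOLING = 2  # linting, continuous test execution
    AI_SUGGESTIONS = 3      # AI proposes changes; developer accepts or rejects
    AI_GENERATION = 4       # prompted AI draws on external resources (e.g. MCP)
    FULL_AUTONOMY = 5       # AI develops the system without any developer


def developer_in_control(level: AutomationLevel) -> bool:
    """Per the article, the developer remains fully in control up to Level 2;
    from Level 3 onward, responsibility starts shifting toward the tools."""
    return level <= AutomationLevel.INTEGRATED_TOOLING
```

For example, `developer_in_control(AutomationLevel.AI_SUGGESTIONS)` is false: once the tool starts proposing changes, the developer is reviewing rather than originating the work.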

Should we really hand over this responsibility without question just because big business says “trust us”?

Resources

  1. Aviation Autonomy, Summary of the Consultation Paper, UK Law Commission: JARUS levels of automation, page 6.
  2. Synopsys, The 6 Levels of Vehicle Autonomy Explained
  3. NHTSA, Automated Vehicles for Safety
  4. Navigating the 5 Levels of Drone Autonomy: A Look at SUIND’s Approach to Autonomous Systems
  5. DroneLife: DRONEII: Tech Talk – Unraveling 5 Levels of Drone Autonomy

