Anthony Scriffignano, PhD, SVP & Chief Data Scientist, Dun & Bradstreet [NYSE: DNB]
Our modern times see us surrounded by technology and technology-enabled services designed to make life easier and more productive. New trends, such as vast improvements in Artificial Intelligence and the Internet of Things, add to the integrated, intuitive nature of life. Technology is better able to predict, advise, and in some cases intervene to make things safer and more useful. Nevertheless, we would be naive to assume that all of this advancement comes without challenge to the infrastructure that drives it forward. One of the greatest pressure points (and opportunities) is automation testing for software engineering.
Clearly, there are established disciplines around critically important aspects of testing, such as regression analysis (making sure everything else still works), load performance (making sure what we are building works at scale), and integration (making sure what we build works in the end-to-end environment). The disciplines in the field are well established and supported by a growing, multibillion-dollar industry. There are, however, some subtle changes creeping up on the world of technology that are worthy of consideration.
• Human Interaction: There are a number of testing approaches designed to mimic the behavior of humans. At the same time, human interaction is being subtly influenced by our technology environment. In the past, we would browse, fill out forms, make selections, and otherwise interact in very well-understood ways. Today, there are digital agents which can act on our behalf (bots), as well as cognitive systems designed to read things for us and make recommendations even before we begin to directly interact with an environment.
We must consider that our systems and processes, designed to interact with humans, will increasingly be interacting with a hybrid of humans and digital human-agent proxies.
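One way to prepare for this hybrid traffic is to test against both interaction profiles explicitly. The sketch below is illustrative only: the `classify_agent` heuristic and its timing threshold are invented for this example, not part of any real product, but it shows how fixtures for human-paced and bot-paced sessions can sit side by side in an automated test.

```python
# Sketch: exercising a system with both human-like and bot-like
# interaction patterns. `classify_agent` and HUMAN_MIN_INTERVAL are
# hypothetical, for illustration only.

HUMAN_MIN_INTERVAL = 0.5  # seconds; humans rarely submit forms faster

def classify_agent(inter_arrival_times):
    """Label a sequence of request gaps as 'human' or 'bot-like'."""
    if not inter_arrival_times:
        return "human"
    avg = sum(inter_arrival_times) / len(inter_arrival_times)
    return "human" if avg >= HUMAN_MIN_INTERVAL else "bot-like"

# Fixtures covering both interaction profiles.
human_session = [1.8, 0.9, 2.4]     # browsing, pausing, filling forms
bot_session = [0.01, 0.02, 0.01]    # near-instant scripted requests

assert classify_agent(human_session) == "human"
assert classify_agent(bot_session) == "bot-like"
```

The design point is that both profiles become first-class test cases, rather than assuming every caller behaves like a person at a keyboard.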
• Assumptions of Stability: Many methods, regression in particular, work from the premise that we can examine a stable set of past data to project how a new enhancement will function. The preconditions for such methods center on stability. Our digital environments are slowly becoming intentionally unstable in certain ways. For example, autonomous devices include goal-modification modalities for when the environment changes in unpredicted ways, since these devices do not have the ability to “check in” for new instructions.
We should be careful to consider how new environments might impact the sort of testing required to anticipate changes in environments and automated reactions to those changes.
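One way to test without assuming a stable baseline is a property-based style of check: instead of comparing behavior against one fixed past scenario, assert an invariant across many randomized environment states. The toy planner below is entirely hypothetical, meant only to show the shape of such a test.

```python
# Sketch: a property ("the device still reaches its goal") checked
# across many randomized environments, rather than one fixed baseline.
# `navigate` and its obstacle model are invented for illustration.
import random

def navigate(position, goal, obstacles):
    """Toy planner: step toward goal, detour around blocked cells."""
    step = 1 if goal > position else -1
    nxt = position + step
    if nxt in obstacles:           # environment changed under us
        nxt = position + 2 * step  # goal modification: hop the obstacle
    return nxt

random.seed(42)
for _ in range(100):  # property must hold in every generated environment
    pos, goal = 0, 10
    obstacles = {random.randint(1, 9) for _ in range(3)}
    for _ in range(20):
        pos = navigate(pos, goal, obstacles)
        if pos >= goal:
            break
    assert pos >= goal, f"failed to reach goal around {obstacles}"
```

Because the environment is regenerated on every iteration, the test encodes an expectation about behavior under change, not a snapshot of one stable past.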
• Malfeasance: There are many recent examples of technology and systems “misbehaving” because they were used in unintended ways. Sometimes, these unintended uses are benign, such as when users find that a piece of functionality serves another purpose (e.g., using data from home automation systems to design better security systems). While such unintended use should be considered, there is arguably substantially more risk from unintended malfeasant use. Consider, for example, a system which records data for factory automation in a way that allows the data to be compromised and used to reverse-engineer intellectual property. On a more human scale, consider autonomous biomedical devices which can be externally configured in unintended ways.
The science of considering use cases for failure and adverse conditions must continue to keep pace with the changing behavior of cyber criminals and others who intend to use systems and processes for their own ends.
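In practice, this means writing abuse-case tests that probe a system with hostile inputs, not just the intended “happy path.” The fragment below is a minimal sketch: the `apply_dose_config` handler, its safe range, and the hostile inputs are all invented to illustrate the idea of asserting that malicious or malformed configuration is rejected.

```python
# Sketch: abuse-case testing of a hypothetical device-configuration
# handler. SAFE_DOSE_RANGE and apply_dose_config are illustrative
# assumptions, not a real device specification.
SAFE_DOSE_RANGE = (0.0, 5.0)

def apply_dose_config(requested_dose):
    """Accept a dose setting only if it lies inside the safe envelope."""
    if not isinstance(requested_dose, (int, float)):
        raise ValueError("non-numeric dose rejected")
    if not (SAFE_DOSE_RANGE[0] <= requested_dose <= SAFE_DOSE_RANGE[1]):
        raise ValueError("out-of-range dose rejected")  # NaN also lands here
    return requested_dose

# Adversarial cases an attacker (or fault) might send externally.
hostile_inputs = [-1.0, 9999.0, float("nan"), "5; drop table"]
for bad in hostile_inputs:
    try:
        apply_dose_config(bad)
        raise AssertionError(f"hostile input accepted: {bad!r}")
    except ValueError:
        pass  # rejected, as required
```

The test passes only when every hostile input is refused, which turns the malfeasant use case into an explicit, repeatable check rather than an afterthought.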
• System Learning/Making New Mistakes: Development is becoming increasingly agile, and developers are increasingly mobile. There is growing use of shared methods through open source initiatives. There is a huge opportunity for automation testing to increase our ability to sustain captured learnings. For example, in the future, we may better use AI methodologies to predict, in advance, the types of failure that may emerge, leading to development methodologies that are more anticipatory. This sort of “self-healing” development mindset has been around for some time (for example, instituting knowledge management systems for developers), but new capabilities to capture, retain, and synthesize massive amounts of dynamic data bring about exciting new possibilities.
We must learn from our mistakes, but also learn from how we react to failures, better anticipating required shifts in training, methods and tools of the future.
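Even a simple form of this idea, mining past defect records to anticipate where new failures are likely, can guide where to concentrate test effort. The sketch below is a toy stand-in for the AI-assisted prediction described above; the defect log, module names, and threshold are all invented for illustration.

```python
# Sketch: using historical defect counts to flag failure-prone modules,
# a toy stand-in for richer AI-based failure prediction. The log and
# threshold are hypothetical.
from collections import Counter

defect_log = [  # (module, root_cause) pairs from past releases
    ("auth", "race condition"), ("auth", "timeout"),
    ("billing", "rounding"), ("auth", "timeout"),
    ("reports", "encoding"),
]

def risky_modules(log, threshold=2):
    """Rank modules whose historical defect count meets a threshold."""
    counts = Counter(module for module, _ in log)
    return [m for m, n in counts.most_common() if n >= threshold]

# Focus extra regression coverage on historically fragile modules.
assert risky_modules(defect_log) == ["auth"]
```

Capturing failures in a structured log like this is what makes the “learn from how we react” step automatable at all: the data can then feed far more sophisticated models than a simple count.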
We are indeed at the cusp of a new era in technology development. There is an enormous and increasing expectation placed on the speed and quality of development. The cost of failure is no longer contained at the system level, because increasingly everything is connected to everything else. We live in exciting times indeed for those who continue to advance our capabilities to test, to improve, and to proactively influence the march of progress.