
The Warped Recipes of Algorithms

Just as following or leaving out key ingredients and essential steps might result in a delicious chocolate cake or a complete failure, an algorithm entails a step-by-step sequence of operations that solves a particular computational task, and it too might lead to success or failure.

Clearly, algorithms can help solve much more complex tasks than baking cakes, and they can also come with much more complex side effects. Recipes do not tend to discriminate, but algorithms might. People can be discriminated against by the automated decisions an algorithm makes, perhaps when applying for a loan or a job. The warped recipes that some algorithms turn out to be may also have consequences for development. Promises of a better future need to be accompanied, or preceded, by promises of better algorithmic recipes.

Machine learning systems are a popular family of algorithms that learn from data and encode the patterns they find into structured models, such as rule systems, decision trees, or neural networks. Algorithms can be combined in creative ways and interact so closely and cleverly with human decisions and behaviour that the resulting combinations might better be called “algorithmic systems”.
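
To make that concrete, here is a minimal, hypothetical sketch in Python, using the scikit-learn library, of how such an algorithm encodes the patterns in a handful of labelled examples into a decision tree and then classifies a new case. The features, data, and labels are invented purely for illustration.

```python
# A minimal sketch: a machine learning algorithm learns a decision tree
# from labelled examples and then uses it to classify a new case.
# The features, data, and labels are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row is an example: [has_four_legs, has_wheels]
X = [
    [1, 0],  # four legs, no wheels   -> chair
    [1, 0],
    [0, 1],  # no legs, wheels        -> car
    [0, 1],
    [1, 1],  # legs and castor wheels -> still a chair
]
y = ["chair", "chair", "car", "car", "chair"]

model = DecisionTreeClassifier()   # the "recipe" is learned, not written by hand
model.fit(X, y)                    # encode the patterns found in the data

# The learned structure can be read like a recipe of if/then steps.
print(export_text(model, feature_names=["has_four_legs", "has_wheels"]))

# The conclusion for a new, unseen example is a class label.
print(model.predict([[0, 1]]))     # -> ['car']
```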

An advanced algorithm can gather and process information, interact with other algorithms, and arrive at a certain conclusion. The conclusion may be a list of classifications and what they mean, such as when an image-recognition algorithm guesses whether an image is showing a chair or a car.

In other instances, algorithmic systems have more important repercussions. For example, judges in the United States use simple but proprietary machine-learning algorithms to assess whether a criminal offender is likely to commit a similar crime again. Algorithmic systems seem to reduce complexity and simplify human decision making by “basing decisions on the data”. But these algorithmic systems may fail to correctly predict who will be a repeat offender, and so far they seem at times to err on the side of racial profiling: minorities tend to be treated as higher risk. The bias stems from a history of racial bias in sentencing and arrests, resulting in skewed data that are fed into algorithmic systems.

Algorithms like these now influence many aspects of modern life in almost invisible ways, at least until they fail. In the past several years, the risk of discrimination and hidden biases embedded in algorithmic systems has triggered considerable debate, at least among those interested in the social dimensions of the unfolding algorithmic revolution. Examples go beyond machine learning that incorrectly assesses the likelihood of minorities committing a future crime: investigations have shown that algorithmic systems also discriminate against citizens by assigning unjustifiably low credit scores, denying healthcare to disabled people based on faulty historical data, and weeding out CVs in the recruitment process in a discriminatory way.

The key issue here is the bias created by data. That is, if credit ratings or crime data carry biases against certain groups in society, or omit them entirely (say, minorities in the United States or people from particular regions in a country), then algorithmic systems may propagate these patterns and lead to discriminatory decisions. As some researchers have noted, “decisions produced by the algorithms are as good as the data upon which such decisions are computed and the humans and systems operating them”.
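
The following toy simulation, using entirely synthetic data and not drawn from any real credit or crime dataset, is one way to see the mechanism: two groups behave identically, but the historical labels recorded for one group are skewed, and a model trained on those labels reproduces the skew when scoring otherwise identical individuals.

```python
# A deliberately simplified simulation using only synthetic data (no real
# credit or crime records): two groups behave identically, but the recorded
# labels for group B are skewed, and the trained model reproduces the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)           # 0 = group A, 1 = group B
behaviour = rng.normal(0.0, 1.0, n)     # identical distribution in both groups

# The true outcome depends only on behaviour, not on group membership.
true_risk = 1 / (1 + np.exp(-behaviour))
outcome = rng.random(n) < true_risk

# Historical records are skewed: some of group B's negatives are recorded
# as positives (for example, through over-policing).
recorded = outcome | ((group == 1) & (rng.random(n) < 0.2))

model = LogisticRegression().fit(np.column_stack([behaviour, group]), recorded)

# Two individuals with identical behaviour receive different risk scores.
print("Group A risk:", round(model.predict_proba([[0.0, 0]])[0, 1], 2))
print("Group B risk:", round(model.predict_proba([[0.0, 1]])[0, 1], 2))
```

In this toy setup the two individuals differ only in group membership, yet the model scores the second as higher risk, simply because the records it learned from were skewed.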

Increasing relevance

These concerns are relevant in two arenas that might surprise some: development and biosphere-based sustainability.

Algorithmic systems are already a fundamental part of the way we perceive, modify, and respond to the natural world around us. Researchers, policymakers, and practitioners make use of algorithms and their results in, for example, climate-change modelling, landscape planning, and fish stock assessments. Businesses employ image-processing algorithms to assess the presence of gold ores; 3D object recognition algorithms support deep-sea mining of rare earth minerals; algorithmic systems used in agriculture analyse weather and soil data to maximise production. And these are only a few examples of the many algorithms used across diverse sections of our societies.

Assuming that all of these applications are or will remain flawless in the face of changing social and ecological circumstances is unwise. If we look closely, some of their shortcomings can be detected already.

An interesting example of the close interplay between algorithmic systems and the way people perceive and respond to environmental change is the application in Indonesia of REDD+ schemes, set up through the United Nations to reduce emissions from deforestation and forest degradation by offering economic compensation. As Robert M. Ochieng explains in his PhD thesis, the important monitoring, reporting, and verification systems supporting REDD+ schemes rely heavily on algorithms and data.4 These underpin the estimates of carbon mitigation metrics and, in the end, determine the economic compensation a country receives.
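
As a rough illustration only, and not the actual REDD+ methodology, the toy calculation below shows how a single assumption embedded in a monitoring algorithm, here the carbon stock assumed per hectare of forest, flows straight through to the reported emission reduction and the compensation it implies. Every number in it is invented.

```python
# A toy calculation (not the actual REDD+ methodology; every number is
# invented) showing how one embedded assumption, the carbon stock assumed
# per hectare of forest, changes the estimated emission reduction and the
# compensation it implies.

def estimated_emission_reduction(baseline_ha, observed_ha, carbon_per_ha_t):
    """Avoided deforestation (ha) times assumed carbon stock, converted to CO2."""
    avoided_ha = baseline_ha - observed_ha
    co2_per_tonne_carbon = 44 / 12       # molecular-weight conversion, C -> CO2
    return avoided_ha * carbon_per_ha_t * co2_per_tonne_carbon

baseline_ha = 100_000    # deforestation expected without the scheme (hypothetical)
observed_ha = 70_000     # deforestation actually observed (hypothetical)
price_per_t_co2 = 5.0    # hypothetical compensation, USD per tonne of CO2

# Two different calibrations of carbon stock per hectare (hypothetical values).
for label, carbon_per_ha in [("calibration A", 120.0), ("calibration B", 180.0)]:
    reduction = estimated_emission_reduction(baseline_ha, observed_ha, carbon_per_ha)
    print(f"{label}: {reduction:,.0f} t CO2 -> ${reduction * price_per_t_co2:,.0f}")
```

With everything else unchanged, the choice of calibration alone shifts the estimate, and hence the payment, by fifty per cent.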

During the system’s implementation in Indonesia, national stakeholders forcefully questioned the planned forest monitoring system. The reason was that the algorithms and assumptions embedded in the monitoring system were based on the ecological understanding of how an Australian forest works, rather than an Indonesian forest, and lacked the transparency they expected. While this particular issue has been resolved, it points to the importance of recognising that algorithmic systems are embedded in socio-political and ecological contexts, and have considerable influence over decisions important for biosphere-based sustainability and development.

Algorithms should not be allowed to fail quietly. For example, data indicating the hole in the ozone layer were overlooked for almost a decade before the hole was discovered in the mid-1980s. The extremely low ozone concentrations recorded by the monitoring satellites were treated as outliers by the algorithms and therefore discarded, delaying by a decade our response to one of the most serious environmental crises in human history. Diversity and redundancy helped discover the error (see the Biosphere Code Manifesto, principle 5).
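
The sketch below is not the actual satellite processing pipeline, only a schematic of the general failure mode: a quality-control step that silently discards readings it considers impossible hides a real anomaly, whereas flagging the same readings keeps the signal visible for a human, or a second, independent system. The threshold and values are invented.

```python
# A schematic illustration (not the actual satellite processing pipeline):
# a quality-control step that silently discards "impossible" readings hides
# a real anomaly, while flagging the same readings keeps it visible.
# The threshold and values below are invented.

readings = [310, 305, 298, 300, 295, 150, 145, 140, 290, 300]  # invented values
PLAUSIBLE_MIN = 180   # lower bound the filter assumes can never occur

# Quiet failure: values below the assumed minimum are dropped as sensor errors.
filtered = [r for r in readings if r >= PLAUSIBLE_MIN]
print("After silent filtering:", filtered)    # the anomaly has vanished

# Louder alternative: keep everything, but flag values outside the assumed
# range so a human, or a second independent system, can review them.
flagged = [(r, "SUSPECT" if r < PLAUSIBLE_MIN else "ok") for r in readings]
print("With flagging:", flagged)
```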

As algorithmic systems continue to be used in our interactions with the biosphere, in agriculture, forestry, fishing, and more, they should also be sensitive to an increased understanding of how ecosystems and the biosphere operate in the face of complexity, surprise, and change. Such algorithmic systems should not aim only at enhancing efficiency in resource extraction, for example by maximising biomass production in forestry, agriculture, and fisheries. They should build on resilience principles and encourage learning, diversity, and redundancy.
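
One resilience-inspired design choice can be sketched in a few lines: instead of trusting a single optimised model, run two independently built estimators and treat large disagreement as a prompt for learning and review rather than as noise to average away. The function, numbers, and tolerance below are hypothetical.

```python
# A minimal sketch of redundancy and diversity in an algorithmic system:
# two independently built estimates of the same quantity are compared, and
# large disagreement triggers review instead of being averaged away.
# The function, numbers, and tolerance are hypothetical.

def reconcile(estimate_a: float, estimate_b: float, tolerance: float = 0.15) -> float:
    """Combine two independent estimates, or demand review if they diverge."""
    spread = abs(estimate_a - estimate_b) / max(abs(estimate_a), abs(estimate_b))
    if spread > tolerance:
        raise ValueError(
            f"Estimates disagree by {spread:.0%}; route to human review "
            "instead of acting automatically."
        )
    return (estimate_a + estimate_b) / 2

# Hypothetical fish-stock estimates (tonnes) from two independent models.
print(reconcile(52_000, 55_000))        # close enough: combined estimate returned
try:
    print(reconcile(52_000, 31_000))    # large disagreement: review requested
except ValueError as err:
    print("Review needed:", err)
```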

A call for transparency

Algorithmic systems are becoming increasingly sophisticated and effective through the application of machine learning and deep neural networks, sometimes captured under the term “artificial intelligence”. They are also clearly finding an ever-growing universe of applications in sectors critical for biosphere stewardship and development.

A marketplace for predictive agriculture algorithms, called PrecisionHawk, now thrives. More broadly, you can buy services built on the integration and processing of large datasets, or “big data”, that let users optimise urban planning, large-scale fishing strategies, or agricultural investments in near real-time; examples include Descartes Labs, DigitalGlobe, and Orbital Insight.

Industrial-scale reforestation services can now use very large sets of real-time data to “create an optimised planting pattern”, for example with DroneSeed. There is also a growing community exploring “AI-D” or “AIForAll”, artificial intelligence for development with a special focus on the world’s most vulnerable communities. These are just a few examples.

→ Read more as experts debate the pros and cons of the Algorithm Age

This piece was originally published in Rethink, resilience thinking for global development. The text is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Photos courtesy of NASA.
