A digital twin is a copy of a person, object or process that is created using data. It may sound like science fiction, but some have claimed that you could potentially have a digital double within the next decade.
As a replica of a person, a digital twin would – ideally – make the same decisions you would if you were presented with the same information.
This may sound like yet another speculative claim by futurists. But it is far more plausible than many people might like to believe.
While we may assume that we are special and unique, with enough information, artificial intelligence (AI) can make many assumptions about our personalities, social behavior, and purchasing decisions.
The age of big data means that vast amounts of information are collected about your outward behaviors and preferences, as well as the behavioral traces you leave behind (sometimes called "data exhaust").
Equally contentious is the extent to which organizations collect our data. In 2019, The Walt Disney Company acquired Hulu, a company that journalists and lawyers pointed out had a questionable record when it came to data collection.
Seemingly innocuous phone applications — like those used to order coffee — can collect substantial amounts of data from users every few minutes.
The Cambridge Analytica scandal illustrates these concerns, with consumers and regulators worried about the potential for such data to identify, predict and modify individuals' behavior.
But how concerned should we be?
High versus low fidelity
In simulation studies, fidelity refers to how close a copy, or model, is to its target: the degree of realism with which a simulation reproduces the real world. For example, a racing video game offers a low-fidelity simulation of driving — the on-screen car speeds up and slows down as keys on a keyboard or controller are pressed.
A digital twin requires a high degree of fidelity capable of incorporating real-time, real-world information: if it’s raining outside right now, it will rain in the simulator.
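The distinction between a static, low-fidelity model and a twin that mirrors real-time conditions can be sketched in a few lines of code. This is a hypothetical illustration — the class and field names (`WorldState`, `raining`, `step`) are invented for the example, not taken from any real digital-twin system.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    raining: bool
    temperature_c: float

class LowFidelitySim:
    """Ignores the real world: always simulates fixed, clear weather."""
    def step(self, observed: WorldState) -> WorldState:
        return WorldState(raining=False, temperature_c=20.0)

class HighFidelitySim:
    """Mirrors real-time observations: if it rains outside, it rains here."""
    def step(self, observed: WorldState) -> WorldState:
        return WorldState(raining=observed.raining,
                          temperature_c=observed.temperature_c)

outside = WorldState(raining=True, temperature_c=11.5)
print(LowFidelitySim().step(outside).raining)   # low fidelity ignores the rain
print(HighFidelitySim().step(outside).raining)  # high fidelity reflects it
```

The cost difference is the point: the high-fidelity version only works if live observations of the world are continuously available to feed into it.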
In industry, digital twins can have far-reaching implications. If we can model human-machine interaction systems, we can allocate resources, anticipate shortages and failures, and make projections.
A human digital twin will incorporate a large amount of data about a person’s preferences, biases and behaviors, and will be able to draw on information about the user’s immediate physical and social environment to make predictions.
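At its crudest, the predictive core of such a twin is just pattern-matching over collected behavioral traces. The toy class below is a hypothetical sketch (the names `ToyTwin`, `observe` and `predict` are invented for illustration): it predicts a person's next choice as their most frequent past one, ignoring the contextual signals a real twin would need.

```python
from collections import Counter

class ToyTwin:
    """A crude 'digital twin': predicts behavior from observed history."""
    def __init__(self) -> None:
        self.history: Counter = Counter()

    def observe(self, choice: str) -> None:
        # Each recorded choice is a behavioral trace about the person.
        self.history[choice] += 1

    def predict(self) -> str:
        # Predict the most frequent past behavior; a higher-fidelity twin
        # would also weigh context (location, time, social environment).
        return self.history.most_common(1)[0][0]

twin = ToyTwin()
for choice in ["latte", "latte", "espresso", "latte"]:
    twin.observe(choice)
print(twin.predict())  # → latte
```

Even this trivial model hints at the data appetite of the real thing: every prediction is only as good as the volume and relevance of the traces collected.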
These requirements mean that achieving a true digital twin remains a distant possibility for the foreseeable future. The number of sensors required to collect the data, and the processing capacity needed to maintain a virtual model of the user, would be enormous. At present, developers settle for lower-fidelity models.
Ethical issues
British Prime Minister Benjamin Disraeli is frequently credited with saying, "There are three kinds of lies: lies, damned lies, and statistics," suggesting that numbers cannot be trusted.
This sentiment reflects a misunderstanding of how statisticians collect and interpret data, but it raises an important concern.
Digital twins rely on collecting and analyzing data about our behaviors and habits to predict how we will behave in given situations.
One of the most important ethical problems with the digital twin concerns the quantitative fallacy, which assumes that numbers have an objective meaning that is separate from their context.
When we look at numbers, we often forget that they have specific meanings that come from the measuring devices used to collect them. And a measurement tool may work in one context but not in another.
When collecting and using data, we must recognize that selection means choosing some characteristics and ignoring others. Often, this choice is made out of convenience or due to the practical limitations of the technology.
We must scrutinize any claims based on data and artificial intelligence, because the design decisions behind them are often not available to us. We must understand how data is collected, processed, used and presented.
Imbalance of power
An imbalance of power is at the center of a growing public debate over data, privacy and surveillance.
On a smaller scale, it can create or exacerbate the digital divide — the gap between those who do and those who do not have access to digital technologies. On a larger scale, it threatens a new colonialism based on access to and control of information and technology.
Even the creation of a lower-fidelity digital twin offers opportunities to monitor users, predict their behavior, attempt to influence them, and represent them to others.
While this may help in healthcare or education settings, failing to give users access to and the ability to assess their data can threaten individual autonomy and the collective good of society.
Data subjects do not have access to the same resources as large corporations and governments: they lack the time, the training and, perhaps, the motivation to scrutinize how their data is used. Constant and independent oversight is needed to ensure the protection of our digital rights.