The butterfly effect of new technology: How innovation could lead to worrying consequences
Businesses need to be aware of the unintended knock-on effects of exciting new tech.
When Mark Zuckerberg set up 'Thefacebook' as a tool for connecting Harvard students, the prospect of global influence and the potential to swing elections probably felt like a stretch – even for the ambitious young entrepreneur.
Zuckerberg now has a huge challenge on his hands, as Facebook's role as a socio-political force has become undeniable. But he's not the only tech leader waking up to the unintended consequences of innovation.
As businesses from all corners of the globe converge on world-changing technologies, it's easy to see and be excited by the huge benefits that they could bring.
But the time is right for all of us – including users, developers, investors and policy-makers – to take notice of the 'butterfly effects' of new tech.
Some of the biggest ripples of change are spreading beyond immediate industry challenges into ethical, legal and social spheres, fundamentally changing our world.
Autonomy and its potential pitfalls
Heavy investment from major auto companies is accelerating automated vehicle development, with many industry leaders predicting driverless cars in the mainstream by 2019.
The advances are incredibly attractive, of course, for the convenience and efficiency they should bring.
But safety is perhaps the most convincing argument for getting more automated vehicles on our roads, with 90% of vehicle accidents attributed to human error. Insurance premiums should drop drastically, DUIs could become a thing of the past, and many terrible road accidents could be avoided.
By the same token, however, more lives could be put at risk: with one in five organ donors in the US coming from road traffic accidents, safer driving is likely to lengthen the ever-growing waiting list for donor organs and put pressure on health services.
It's a morbid prospect, but this is the essence of an unintended consequence – the impact of new technology that reaches farther than many of us realise at first. It's proof that industries seldom exist in a bubble, and that successes or failures in one field have many repercussions elsewhere.
The past few months have shown us more examples of this phenomenon in action.
Social sharing and offline exposure
Services such as Airbnb provide an open platform for hosts and guests to rent and book with minimal friction. However, the service has since been blamed for a string of home burglaries by 'guests' who took advantage of knowing when a property would be empty. Geo-tagged social updates have also been touted as a tool for tech-savvy burglars.
Even Pokémon Go was somewhat overshadowed by unintended consequences. Criminal groups targeted Pokémon hotspots and placed lures (in-game beacons that attract Pokémon, and so players) in opportune locations; some players who turned up were ambushed and had their smartphones stolen.
User education is a critical part of preventing such incidents, because we rarely recognise our own exposure to hidden risks until it's too late.
Sometimes even the experts get caught out.
When the researcher behind the Twitter account @MalWareTechBlog became the accidental hero of the recent WannaCry ransomware outbreak, he soon found his personal details posted online, thanks to journalists digging into his past. That's all very well for crediting the right person, but it could make the blogger a target for the very cyber-criminals he was trying to combat.
So-called 'doxing' attacks – the exposure of personal information such as real name, address, phone number and email – are now relatively common. And in a culture of instant online celebrity, it's an issue that could affect anybody.
Digital data trails
What we don't often realise is that our routine use of social media, websites and apps creates thousands of data points every day. These are gathered and stored as a means of understanding our preferences and making our online lives more convenient, but this exchange of data for online services is all too often left in the dark.
It's a situation that users are waking up to in the climate of data hacking and leaks, which show no signs of slowing down.
However, with driverless cars, drones and artificial assistants all looking likely to become a feature of our daily lives, our digital footprints may soon become enormous, complex and harder for us to manage alone.
The time is therefore ripe for businesses, governments and individuals to demand that these emerging fields put user safety first: better user understanding of risk, more transparent risk analysis from developers, and updated policy to govern it all.
Sci-fi policy
We've already seen the US government publish its plan for the future of artificial intelligence, with guiding principles that echo Asimov's laws: AI should augment humans, not replace them; AI should be ethical; and everyone should have an equal opportunity to develop AI systems.
The UK's Science and Technology Committee released a report in October 2016 calling for a national strategy on robotics and AI that addresses their ethical, legal and social impacts. Since then, the government has announced a reported £17 million investment in the industry, covering areas such as security.
In Europe, MEPs have called for comprehensive rules governing how humans will interact with artificial intelligence and robots, covering questions such as robots' legal status and whether a 'kill switch' should be required.
But businesses should be proactive in solving the issues that arise from their innovations, as social media platforms and search engines are doing in their ongoing battle against fake news and trolls.
There is clearly an obligation for companies to do due diligence on the worst-case scenarios arising from their new technology.
However much convenience or novelty a platform or feature offers, safety and privacy should be the top priorities: what does app X offer its users, and what could the risks be? Does one outweigh the other? What happens when user data becomes part of the 'system', and can users opt out?
It's far from an impossible task – recent history should be proof enough that a broader understanding of social and behavioural context can keep users and businesses safe while technology progresses apace.
Mark Curtis is founder of service design agency Fjord.