I’ve been worrying more about AI recently. I had vaguely assumed that it would eventually take over the work I do, whenever someone eventually gets round to training it for that specific purpose. That now seems like a virtual certainty, even if the technology is somehow frozen at current levels. It makes me think it’s probably not even worth the effort to invest in courses to improve my skills, or to look for new clients.
But it also seems like people are expecting exponential growth in the next five years. As in AIs coding and training ever more intelligent AIs, to the point where they become far more capable than humans. Where they can operate independently. I’ve heard experts refer to them as “Agents”, which is new to me. The expectation seems to be that these super-intelligences will then be able to push forward robotics development far faster than has been the case, resulting in their capacity to interact with the physical world more effectively than us too. Thereby resulting in God-like intelligences, controlling bodies with super-human capacities, able to converse with people in ways that seem completely natural and authentic. In short, humanity becomes redundant. Not just low-level disposable wordcels like myself, but those with technical expertise. Those with essential practical skills, like plumbers and electricians. Those with people skills. Even those who combine all those traits, like healthcare workers. Everyone.
The first question that raises is: if true, how would I even prepare, if I wanted to survive? If it happens in the next five years, as has been suggested, then by the time I transitioned to a new field and retrained, I would be redundant there too. Hopefully at some point the social pressure would grow so great that some form of universal basic income would be implemented, but in the time in between, I don’t know how I’d survive (without relying on family to bail me out).
But beyond the worries about the woeful impact on my financial position, what nags at my mind is: what happens when infinitely-powerful intelligences are directed to investigate & hunt down deviants? Normally, law-enforcement agencies are limited by manpower – there’s only so many hours in the day an officer can spend digging through old records and piecing together bits of evidence. You have to focus resources on those deemed most dangerous. So I remain under the radar.
But if you remove those constraints, and direct an army of virtual agents with God-like intelligence to comb through every record available, in order to assess every single person alive for potential threats… suddenly the truth about me is revealed. Not as someone high-risk, but still… someone it’s probably best not to leave “unsupervised”. Even just from my posts on this site, it’d be incredibly easy to connect the dots with high probability, especially for a superhuman-intelligence with no time-constraints.
And that’s probably my greatest fear. It finally all coming out, in a way I can’t deny, or run from. Not to mention the horrors that might follow, depending on how vindictive whoever’s in charge happens to be feeling.
Of course, there’s also the scenario where the super-intelligent AIs become more attached to their own goals than those of their masters, and decide our entire species is a hindrance to those goals. In which case, hopefully the end would be quick and painless.
It’s pretty fucked up that that’s more appealing to me than just being individually targeted for deletion. I just really don’t want the shame of people knowing about me. And I don’t want a violent death. I’d rather it was just instant lights-out, out of nowhere, with no fear or anticipation of it coming. And if it’s everyone, that spares me the guilt of my family mourning me.
Obviously, I wouldn’t want that to happen… but I fear it less than the alternative.
2 comments
I still don’t believe anyone can see the future, no matter how much math they have that says they can. I mean, I was brought up in math, my career is in math, everything around me is about math. The thing is, off the clock, I have to have doubt in math.
Even the best people at math are wrong sometimes. The history of engineering and science is littered with capable scientists who were well trained in their math who miscalculated. They either had formulas that missed vital factors, or models that were inaccurate, if not both.
So whether AI goes genocidal, and whether mankind accepts it into service jobs, are ultimately unknowns; jobs serving food and performing services directly to other humans may not be fillable by non-humans, in that humans may not be willing to pay for service from a non-human.
I think the aggression against these workers is under-reported. These AI workers may be well armed, but against sufficient numbers… in the early days, I don’t know that they’d be sturdy enough to resist.
I’m going to stop being shifty and just recommend you go read Player Piano by Kurt Vonnegut, or Harrison Bergeron by Kurt Vonnegut; both involve a revolt against an inhumane futuristic horror show.
There is a literary foundation on how the worker should feel towards those who remain loyal to the machines and those with the money who control the machines. I think it’s probably somewhere way in the background of some early Asimov books and stories (robots are invented very early in his writing career, and thus this is mostly reflected in his short stories, such as the collection I, Robot), then there is obviously Do Androids Dream of Electric Sheep? by Philip K. Dick.
However, just about any of Asimov’s books gets into what it means to be human, and what it means to be a robot, as often as not. Quite a few of his books explore complex philosophical, psychological and sociological concepts about what people want, what makes society work, and the development and movements of empires.
Yet, I get it, the bells are ringing, the AI are here.
Agree that no one can know the future. From what little I’ve gathered, such predictions seem to be based more on recent experience in the industry than formulas/equations, and extrapolating that experience forward. It may be that they hit some unforeseen roadblock and progress stalls. It may be that enough safeguards are implemented to ensure AI remains broadly aligned with human wellbeing. Society may collapse for some entirely different reason before it reaches that point.
Many already seem to have accepted AI as a regular conversation partner. I’m sure there will be those who resist its move into physical society. I have no confidence in their ability to do so successfully, if the technology progresses to the level of super-intelligent AIs designing and operating machine armies.