"I'm not a pessimist." Sam Altman on white-collar jobs, AI autocracy, and China's hegemony
The head of OpenAI is sure that even artificial superintelligence will not completely replace human labor

Sam Altman argues that no one is going on indefinite leave because of AI / Photo: Shutterstock.com
Jobs will not disappear completely because of technological advancement, OpenAI co-founder Sam Altman said in an interview with The Indian Express on the sidelines of the AI Impact Summit. Oninvest collected his statements on how AI will affect the labor market, whether superintelligence threatens humanity, and whether Chinese humanoid robots could one day take over the US.
When will superintelligence arrive?
Artificial superintelligence (ASI) is a few years away. Given how AI is accelerating its own development, I think superintelligence is just around the corner. And artificial general intelligence (AGI) seems quite close.
A year ago, AI was solving high-school-level math problems, and people marveled at the abilities of this "eleventh grader." Last summer, it was already performing well at the world's most difficult olympiads. Last week, in the First Proof project, our model correctly solved seven out of ten new research problems whose answers did not exist before. In just a year, AI has gone from schoolroom success to making discoveries at an academic level: an amazing leap from the capabilities of a bright schoolchild to pushing the very boundaries of human cognition.
How AI will affect the labor market
It is pointless to deny the scale of the change: society's inertia means it will take time, but it will be enormous. As with any progress, we will find new things to do. I am not pessimistic; there will be plenty of work. Previous revolutions also caused panic, yet they invariably created jobs. The idea that we will all go on indefinite vacation has never been borne out.
Many professions in their current form will disappear, but people will adapt and find new occupations. The way of writing code that I learned is no longer relevant. That doesn't mean our work will disappear, but the era of writing C++ by hand is over. It will be the same in many areas: old formats will give way to something new. But skills such as mastery of AI, resilience, adaptability, the ability to understand what people want and how to be useful to them, and the ability to work in a team will always be relevant.
And people will still matter. I recently had to go to the hospital, where a nurse cared for me. If a robot had done that, I would have been very unhappy, no matter how smart it was.
AI: autocracy or democracy?
The IT industry started from a very libertarian position: "We don't need the government, and the government doesn't need us." But now I think government involvement is important for building infrastructure. Given that we need to democratize this technology, governments will have to participate, and companies like ours will have to cooperate with them.
There should not be a single superintelligence in the world. No one person, company, or country, including the United States, should have sole control of it. The world functions best with a broad distribution of power and a balance of forces that allows us to keep each other in check. In such an environment, people will eventually choose the best ideas, and those ideas will become dominant. The world should not be run by a single AI, no matter whose hands it is in.
You can imagine a world where AI concentrates power colossally, where one company or country uses the technology to accumulate influence and wealth. You can also imagine superintelligence at everyone's fingertips, with no rules, and chaos. There are many options between those two points. Personally, I think a more democratized version is a good thing. We will need regulation and guardrails, but I believe it is possible to put this technology in the hands of many people. That would decentralize economic power.
I don't think any country is on the right track in regulating AI. Different countries are trying different approaches, and that is the advantage of a world with many sovereign nations. In the coming years we will watch a variety of strategies play out and see what works and what doesn't. I believe the world will scale up best practices quite quickly.
Has China already overtaken everyone in the AI field?
China is way ahead in some areas and not at all in others. That sounds trivial, I agree. They lead in robot manufacturing thanks to an advantage in electric motors and magnets. They are also ahead in energy deployment. But there are areas where we are ahead. It is hard to lead or lag on all fronts at once. Unless you have the world's only superintelligence; then it would be possible.
The imaginary fear is that the Chinese military will send a billion humanoid robots marching through the streets to crush the U.S. military. The real fear is a new kind of war waged on the Internet: influencing people and hacking critical infrastructure. That seems entirely possible.
About "one of life's dumbest decisions"
[Why did I decide not to take a stake in OpenAI?] We were a non-profit organization, and the rules required board members to be "disinterested persons." By that point I had already been fortunate in my career, so I thought, "I don't care." It was one of the dumbest decisions of my life, and I've made a lot of them.
What's something Altman would never ask ChatGPT?
I guess I would never ask it how to be happy. For that, I'd rather turn to a wise person. I'm not ready to accept ChatGPT's philosophy of life.
This article was AI-translated and verified by a human editor
