Artificial Intelligence Thread

koolkale

Member
In the industrial revolution we replaced our muscles with metal; with AI we will replace our minds with software. So far, we have created artificial intelligence that is really good at learning specialized tasks. When AI can learn general, uncorrelated tasks is when I believe the AI boom will happen. In how many years do you think our minds will become obsolete, and what will be the point of us existing as outdated software/hardware?
 
I think AI is very far from operating without humans; AI will get better, but someone will still have to code it.
 
14249002:Dr_Richard_Hertz said:
unless they learn to code themselves...

At that point, would you even be able to distinguish them from humans? They might already exist.
 
Definitely headed that way. As soon as we get quantum computing hashed out, I'd imagine it would be a matter of years before we see AI with general intelligence... not sure what that will do to the human psyche, considering it may be proof for or against human consciousness.

The thing that scares me about this is the same thing that is terrifying in retrospect about the first tests of nuclear fission. We were mostly certain that it would be okay, but there was also the possibility that a nuclear test would tear apart the fabric of the universe based on some law of physics that we did not yet understand. A few scientists at the time believed it might just vaporize our atmosphere. These guys were handling extremely radioactive material and seeing how it reacted by poking and twisting it with a screwdriver, only to find out later that doing so kills you awfully quickly.

I guess what I'm getting at is that we don't know what we don't know about AI. Look up instrumental convergence and the paperclip problem. Even a general AI designed for the simplest and least offensive tasks could quickly spiral out of control.
https://samharris.org/subscriber-extras/151-will-destroy-future/

Above is a great podcast with Nick Bostrom about AI and doomsday.
 
14249006:eheath said:
At that point, would you even be able to distinguish them from humans? They might already exist.

What you just described is the Turing test. Many systems can already pass it in a specific or limited context. Computers tend, however, to develop their own unique styles when AI is applied. Looking at chess, AI engines like Stockfish or AlphaZero will come up with moves and sequences that no grandmaster ever could, and the moves don't appear to make sense until you've been utterly destroyed.
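
If you want to see that style gap for yourself, here is a rough sketch of asking a local engine for its preferred move using the python-chess library. It assumes python-chess is installed and a Stockfish binary is on your machine; the path below is just a placeholder.

```python
import chess
import chess.engine

# Open a UCI connection to a locally installed Stockfish binary.
# (Path is a placeholder; point it at wherever the binary actually lives.)
engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")

board = chess.Board()                                       # standard starting position
result = engine.play(board, chess.engine.Limit(time=0.5))   # give it half a second to think
print("Engine's choice:", result.move)

engine.quit()
```

Run it from any position you like and compare the engine's pick against what a human would consider the natural move.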
 
14249031:Lonely said:
Definitely headed that way. As soon as we get quantum computing hashed out, I'd imagine it would be a matter of years before we see AI with general intelligence... not sure what that will do to the human psyche, considering it may be proof for or against human consciousness.

The thing that scares me about this is the same thing that is terrifying in retrospect about the first tests of nuclear fission. We were mostly certain that it would be okay, but there was also the possibility that a nuclear test would tear apart the fabric of the universe based on some law of physics that we did not yet understand. A few scientists at the time believed it might just vaporize our atmosphere. These guys were handling extremely radioactive material and seeing how it reacted by poking and twisting it with a screwdriver, only to find out later that doing so kills you awfully quickly.

I guess what I'm getting at is that we don't know what we don't know about AI. Look up instrumental convergence and the paperclip problem. Even a general AI designed for the simplest and least offensive tasks could quickly spiral out of control.
https://samharris.org/subscriber-extras/151-will-destroy-future/

Above is a great podcast with Nick Bostrom about AI and doomsday.

First off, your understanding of the Manhattan Project is pretty limited. It was thought that the first test could ignite the atmosphere, but the likelihood was about the same as the LHC at CERN creating a black hole in Switzerland. Also, the effects of radiation were pretty well known at the time of the Manhattan Project thanks to research by people like Marie Curie. The screwdriver incident you are talking about was a two-time exercise in stupidity where some moron handled a core improperly and it went critical. The second time it happened, radiation was well enough understood that after the accident, Louis Slotin was able to calculate that he would be the only one to die directly from exposure.

Also, AI is pretty limited at the moment. The mathematics needed to build its optimization algorithms is extremely complex, and the networks themselves are still fairly limited in size by the time complexity of training them on current hardware.
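
To give a flavor of what those optimization algorithms actually do, here is a minimal sketch of gradient descent on a toy least-squares problem in plain NumPy. It's just an illustration, not how real networks are trained at scale.

```python
import numpy as np

# Toy sketch of the optimization at the heart of neural-network training:
# plain gradient descent on a linear model with a mean-squared-error loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])      # "ground truth" weights we want to recover
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)                          # parameters we will learn
lr = 0.05                                # learning rate (step size)

for step in range(500):
    pred = X @ w
    grad = 2.0 * X.T @ (pred - y) / len(y)   # gradient of the MSE loss w.r.t. w
    w -= lr * grad                            # step downhill along the gradient

print(w)   # should land close to true_w
```

Deep learning frameworks are running essentially this loop, just with automatic differentiation, millions or billions of parameters, and fancier update rules, which is exactly where the size and training-time limits come from.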
 
14248974:eheath said:
I think AI is very far from operating without humans; AI will get better, but someone will still have to code it.

We are about to max out computer chip transistor density. When that happens, computers will stop getting more powerful every year, and at that point computers will still be far less capable than the human brain.

Right now the human brain is something like billions of billions (iirc) of times more powerful than the most powerful microprocessor today.

It will be a long time before a computer becomes so complex that it can fool itself into believing it's not a computer.
 
14250252:DolansLebensraum said:
We are about to max out computer chip transistor density. When that happens, computers will stop getting more powerful every year, and at that point computers will still be far less capable than the human brain.

Right now the human brain is something like billions of billions (iirc) of times more powerful than the most powerful microprocessor today.

It will be a long time before a computer becomes so complex that it can fool itself into believing it's not a computer.

What about quantum computing?
 
14250339:DolansLebensraum said:
I'm skeptical of the idea of programming atoms. It doesn't seem very feasible to me. But who knows, maybe it is.

Like I know it has been done on small scales, but I don't see how you could actually build a complicated and reliable system like you can with transistors.
 
Y'all ever read Superintelligence by Nick Bostrom? Fascinating book about the development and implications of generally intelligent AI.
 
14250353:DolansLebensraum said:
Like I know it has been done on small scales, but I don't see how you could actually build a complicated and reliable system like you can with transistors.

Yeah, I agree. Quantum computing is still just a buzzword. They have done some stuff in small lab settings, but the juicy stuff everyone is talking about is 10-15 years away.
 
14251242:koolkale said:
Yeah, I agree. Quantum computing is still just a buzzword. They have done some stuff in small lab settings, but the juicy stuff everyone is talking about is 10-15 years away.

idk man, have you really looked at what our society has accomplished in the last 20 years? It's coming sooner than you think.
 
14251304:eheath said:
idk man, have you really looked at what our society has accomplished in the last 20 years? It's coming sooner than you think.

Quantum computing takes a ridiculous amount of energy and is only viable in a lab setting. Still in its nascent stage, it has yet to deliver any benefit to society. Yes, a lot has happened in the past 20 years, and a lot will happen in the next 10, but a commercial application of quantum computing is not one of those things. Some scientists say there will be no use for it in our lifetimes. Stuff to look out for is deep learning and edge computing, which are in extreme demand in data analytics for automated insights, what-if analysis, and real-time analytics/decision making.
 