AI could go 'Terminator' on us

Just because I use OpenAI in one tool doesn't mean anything. But here we go again with the assumptions and the bragging; that's your thing. "I'm superior to everyone, I've been messing with AI for two years, everyone else is stupid" lol, oh god, how pathetic... And the "kid" thing was based on how you act, not your actual age.
You started to show off, and when I told you I've been coding since the 90s you got upset and defensive, but it is how it is. I can be equally arrogant, since I know I'm surely a far more advanced coder than you are.
However, I'm really not interested and feel no need to prove my skills to some nobody with an ego problem, so think what you want and make your assumptions based on guesswork...
Now please don't hijack this thread with your ego tripping; it's against the forum rules anyway, which you should know.
 
Heck, just a few comments and one of you went on a 12-hour coding marathon to implement new changes to his software.
Calm down haha, I was already on a coding marathon and had described to you exactly what I was doing. The general conversation gave me some food for thought, though it hardly changed what I was already coding.

I feel this thread has derailed into low blows very quickly. For what it's worth, I enjoyed your AI doomsday talk; I'll see you at the resistance HQ when AGI drops :D

But I think you took styx's original comment the wrong way; he was just having a bit of banter, and I'm pretty sure he didn't mean it badly.
 
I wouldn't call researchers who have spent a good part of their lives studying AI systems "idiots," and they are surely well versed in the basics of all AI systems; anyone who has taken an ML 101 course is.

This is just your peak Dunning-Kruger speaking.
Hasn't the pandemic taught you anything? Many of the experts are clowns.
 
Garbage for morons who are fans of Marvel movies :devil:
Let the dumb masses focus on shit that doesn't really matter but generates ratings and revenue for the news outlets; let's use the technology to make money and be on the better side of the wealth transfer that is taking place.
 
Amazing stuff.

Transformer architecture LLMs. The most complicated technology known to the human race to date.

Not a single person on the planet understands how or why they work. Not even the small ones.

But here you'll find ultra-level experts who can tell you

"We can unplug it"

"Nothing going to happen. Its BS."

"there is zero risk of ai becoming sentient"

"many of the experts are clowns"


Truly remarkable. The mega geniuses among us have spoken. Those with IQs in the 500s who are so incredibly hyper-intelligent they can look into the hidden layers of transformer-architecture neural networks and recognize the patterns of malevolence, deceit, and true intentions.

We have nothing to worry about.

Who needs AI alignment research when you have these masters of the universe available to quell our fears?
 
AI kind of disagrees with you:

While Transformer-based Large Language Models (LLMs) like GPT-4 are indeed very complex and sophisticated, it would be an overstatement to claim that they are the most complicated technology known to the human race. There are many complex technological achievements in various fields, and the degree of complexity can vary depending on the criteria used to measure it.

Some other highly complex technologies include:

Quantum computing: This technology uses the principles of quantum mechanics to create computational systems that can solve problems faster than classical computers.

Nuclear fusion: Harnessing the power of nuclear fusion as a sustainable and clean energy source has been a long-standing goal in the field of nuclear physics, with ongoing research and development in complex technologies like tokamak reactors.

Artificial intelligence: Beyond LLMs, AI research encompasses numerous subfields, such as computer vision, robotics, and machine learning. These areas involve highly sophisticated algorithms and approaches.

Biotechnology: Genetic engineering, CRISPR-Cas9, and advanced gene therapies are examples of the incredibly intricate technologies developed in this field to manipulate living organisms and their genetic material.

Nanotechnology: The development of materials and structures at the nanoscale has led to groundbreaking advancements in fields like electronics, medicine, and materials science.

Each of these technologies involves an immense level of complexity and requires deep understanding in their respective fields. While LLMs and the Transformer architecture are undeniably complex, it's wrong to state that they are the most complicated technology to date.

It's not accurate to say that not a single person on the planet understands how or why Transformer LLMs (Large Language Models) work. Many researchers, scientists, and engineers in the field of artificial intelligence, especially those specializing in natural language processing and deep learning, have a strong understanding of the underlying principles, architecture, and training methodologies of Transformer-based models.

However, it is true that there is a certain degree of opacity and unpredictability in the behavior of these models, particularly when they become extremely large and complex, like GPT-3. The models learn intricate patterns and correlations from vast amounts of data during their training phase, and it can be challenging to interpret or trace back how they arrive at specific predictions, decisions, or generated text.

This lack of interpretability is a known issue in the field of deep learning and AI research, often referred to as the "black box" problem. Researchers are actively working on developing techniques for better understanding, explaining, and interpreting the decision-making processes of these models, improving their transparency and safety.

In summary, while it's not true that nobody understands how or why Transformer LLMs work, there is an ongoing effort to improve the interpretability and explainability of their decision-making processes.
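To be fair to both sides of that: the mechanics themselves are well documented even if the learned weights are not interpretable. Below is a rough toy sketch of the scaled dot-product attention at the core of a transformer layer, in plain NumPy; it is purely illustrative and not any particular model's or library's code. The "black box" part is the billions of trained parameters, not this math.

import numpy as np

def softmax(x, axis=-1):
    # subtract the row max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how strongly each query matches each key
    weights = softmax(scores, axis=-1)   # each row sums to 1: attention over the tokens
    return weights @ V                   # weighted mix of the value vectors

# toy example: 4 tokens, 8-dimensional embeddings, random stand-ins for "learned" projections
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = [rng.normal(size=(8, 8)) for _ in range(3)]
out = attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8): one context-mixed vector per token

That much anyone who took an ML 101 course can follow; what nobody can fully read off is why a particular set of trained weights produces a particular answer.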
 
AI kind of disagrees with you:


I actually wrote a reply to this, then realized: what's the point? You're just using ChatGPT to try to disagree with me. It's like a monkey throwing shit at you. You either throw shit back like another monkey, or you just walk away.

Best of luck to you.

P.S. HTML isn't a scripting language.
 
I actually wrote a reply to this, then realized: what's the point? You're just using ChatGPT to try to disagree with me. It's like a monkey throwing shit at you. You either throw shit back like another monkey, or you just walk away.

Best of luck to you.

P.S. HTML isn't a scripting language.
P.S. HTML is a markup language... Of course I used ChatGPT; I even said that at the beginning, genius, but yeah, what do I know. You are far superior to anyone here. No idea why you stick around with your 8-figure SaaS plans; you can't gain anything from here. As you say, you are far more advanced than anyone on this planet because you have been messing with AI for two years. Yet you get triggered by the smallest things and have to act like you know much more than anyone else. If you were as smart as you think, you would also have the ability to hear people out before making random assumptions, and be more humble instead of just ego tripping all the time, acting like you are the best (might be a mental disorder, btw). Best of luck to you; show me the picture of your Bugatti once you get it from your 8-figure SaaS. Also, get back to me with your first implementation of an RNN instead of just training a model and thinking that's advanced work.
Cheers
 
Yet you get triggered by the smallest things and have to act like you know much more than anyone else. If you were as smart as you think, you would also have the ability to hear people out before making random assumptions, and be more humble instead of just ego tripping all the time, acting like you are the best (might be a mental disorder, btw).


You are engaging in massive psychological projection now, accusing me of having a mental disorder because I have a different opinion than you. You are not dealing with your emotions very well. Calm down.

You accuse me of having an ego, yet you write things like this:

[attached screenshot]



If you believe you are a more advanced coder than me, then fair enough. It doesn't change my reality or yours, and it doesn't change the fact that you are engaging in crazy-level self-projection. You run around the forum belittling other programmers like this:

[attached screenshot]


And then, when you come across someone you can't bully into submission, you freak out and start self-projecting by telling that person they have an ego problem and it may be a disorder.

Come on, man. Get a grip, now.
 