DAON-PHOBLACHT
CHORCAÍ
Home
Forums
The Langers Forum
Elon Musk buys Twitter
<blockquote data-quote="How bad boy" data-source="post: 7215387" data-attributes="member: 3028"><p>I'm not an expert by any stretch of the imagination, but I have been working daily with AI experts for almost a year now.</p><p></p><p>At the moment, it's still better thought of as machine learning. It's powerful but limited.</p><p></p><p>That said, the work is getting steadily easier. It's not trivial, but it's surprisingly straightforward to build systems that let you run the kind of experimentation needed to develop better machine learning/AI models.</p><p></p><p>It's also very hard to pin down what exactly counts as AI. Is it large language models? Should there be regulations against researchers spinning up clusters to test and improve LLMs? How do you create safety tests that are worth a damn? How would that interact with open-source AI projects like Kubeflow? There are so many questions about how to achieve that control that I'm honestly not sure it's possible any more.</p><p></p><p>Musk is a bit two-faced on this: one minute he's saying AI should be regulated, the next he's looking to build an "anti-woke" version of ChatGPT, which presumably would have no problem being homophobic, racist, sexist, etc., because that seems to be what "anti-woke" means...</p><p></p><p>There are a lot of unknowns here; it very much depends on how you use it. Musk seems happy enough to use unproven AI to drive cars on public streets, where its safety record is middling at best, so tight legislation around its application to self-driving seems sensible, though I'm sure he wouldn't like that. For chatbots? Not much damage they can do for now.</p><p></p><p>It's where these systems cross into physical control that regulation is most clearly justified, but the line is genuinely hard to draw, especially with trends like TinyML and analogue compute.</p><p></p><p>Either way, I don't think Musk is coherent on this: he worries about AI safety while releasing poorly tested AI in safety-critical applications with provable weaknesses that have killed a decent number of people.</p></blockquote><p></p>
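The point about experimentation being surprisingly straightforward is easy to demonstrate. A minimal sketch (my own illustration, not from the post; the library and toy dataset are arbitrary choices) of a complete train-and-evaluate experiment:

```python
# A minimal machine-learning experiment: generate a toy dataset,
# train a baseline classifier, and measure held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data (1000 samples, 20 features).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit a baseline model and score it on the held-out split.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Swapping in a different model or dataset is a one-line change, which is exactly why controlling this kind of research through regulation is so hard to imagine.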