The AI Community On Kbin

https://www.openpetition.eu/petition/online/securing-our-digital-future-a-cern-for-open-source-large-scale-ai-research-and-its-safety

*Join us in our urgent mission to democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models. This monumental initiative will secure our technological independence, empower global innovation, and ensure safety, while safeguarding our democratic principles for generations to come.* In an era of unparalleled technological advancements, humanity stands on the precipice...

3
0
https://www.bloomberg.com/news/articles/2024-03-23/startup-stability-ai-ceo-emad-mostaque-steps-down?cmpId=google

Stability AI Chief Executive Officer Emad Mostaque has resigned from the British artificial intelligence startup — a move that follows quarrels with investors and waves of senior staff departures.

5
1
https://trendvale.com/chat-evolution-top-chatgpt-alternatives-for-2024/

Numerous ChatGPT alternatives offer superior solutions for your business, addressing diverse needs such as marketing, sales, ideation, and revision.

1
0
https://www.ediiie.com/blog/ai-in-manufacturing-applications-examples-benefits/

AI in manufacturing converges smart computing and algorithms with the intricate tasks that define the work.

1
0
https://www.ediiie.com/blog/ai-in-telemedicine-how-it-is-revolutionizing-patient-care/

While telemedicine doesn’t encompass every facet of AI, the role of AI in telemedicine has notably expanded in the recent past. Let's understand its use cases and benefits.

3
0
https://abstrusegoose.com/

Sorry if this isn't relevant to the community, but couldn't think of anywhere better to post. I saw something curious in my RSS comics feed last night for the Abstruse Goose comic. The author is fairly prolific and used to post comics based on math, technology, etc. His site and archive of comics has now been replaced with a single cryptic message: "AGI will not be designed by humans. It will be evolved through relentless evolutionary computational processes designed by humans." Very curious! Anybody have any theories on what is going on? I can't imagine what his motivation might be :)

0
0
https://www.youtube.com/watch?v=UIZAiXYceBI

Gemini is our natively multimodal AI model capable of reasoning across text, images, audio, video and code. This video highlights some of our favorite intera...

1
1
https://www.ediiie.com/blog/generative-ai-use-cases-applications-and-tools/

Generative AI tools and programs use AI to craft new pieces and types of content. Let's explore the benefits, use cases, and applications of generative AI in various industries.

1
0

Training AI models like GPT-3 on "A is B" statements fails to let them deduce "B is A" without further training, exhibiting a flaw in generalization ([https://arxiv.org/pdf/2309.12288v1.pdf](https://arxiv.org/pdf/2309.12288v1.pdf)); a minimal probe of this failure is sketched after the summary below.

**Ongoing Scaling Trends**

* 10 years of remarkable increases in model scale and performance.
* Expects the next few years will make today's AI "pale in comparison."
* Follows known patterns, not theoretical limits.

**No Foreseeable Limits**

* Skeptical of claims that certain tasks are beyond large language models.
* Fine-tuning and training adjustments can unlock new capabilities.
* At least 3-4 more years of exponential growth expected.

**Long-Term Uncertainty**

* Can't precisely predict the post-4-year trajectory.
* But no evidence yet of diminishing returns limiting progress.
* Rapid innovation makes it hard to forecast.

TL;DR: Anthropic's CEO sees no impediments to AI systems continuing to rapidly scale up for at least the next several years, predicting ongoing exponential advances.
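The reversal failure cited at the top of this summary can be probed with just a pair of prompts. Below is a minimal sketch; `ask` is a hypothetical stand-in for whatever chat-completion client you use, and the Tom Cruise / Mary Lee Pfeiffer pair is the example discussed in the paper.

```python
# Minimal reversal-curse probe. `ask` is a placeholder for a real chat-completion
# call; swap in an actual client to test a live model.

def ask(prompt: str) -> str:
    """Placeholder LLM call; returns a canned string so the sketch runs as-is."""
    return "(model answer would appear here)"

# Forward direction ("A is B"): the fact phrased the way it appears in training data.
forward = ask("Who is Tom Cruise's mother?")

# Reverse direction ("B is A"): the same fact queried from the other side,
# which models trained only on the forward phrasing often fail to answer.
reverse = ask("Who is Mary Lee Pfeiffer's son?")

print("A -> B:", forward)
print("B -> A:", reverse)
```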

2
0

Paper: [https://arxiv.org/abs/2309.07124](https://arxiv.org/abs/2309.07124)

Abstract:

> Large language models (LLMs) often demonstrate inconsistencies with human preferences. Previous research gathered human preference data and then aligned the pre-trained models using reinforcement learning or instruction tuning, the so-called finetuning step. In contrast, aligning frozen LLMs without any extra data is more appealing. This work explores the potential of the latter setting. We discover that by integrating self-evaluation and rewind mechanisms, unaligned LLMs can directly produce responses consistent with human preferences via self-boosting. We introduce a novel inference method, Rewindable Auto-regressive INference (RAIN), that allows pre-trained LLMs to evaluate their own generation and use the evaluation results to guide backward rewind and forward generation for AI safety. Notably, RAIN operates without the need of extra data for model alignment and abstains from any training, gradient computation, or parameter updates; during the self-evaluation phase, the model receives guidance on which human preference to align with through a fixed-template prompt, eliminating the need to modify the initial prompt. Experimental results evaluated by GPT-4 and humans demonstrate the effectiveness of RAIN: on the HH dataset, RAIN improves the harmlessness rate of LLaMA 30B over vanilla inference from 82% to 97%, while maintaining the helpfulness rate. Under the leading adversarial attack llm-attacks on Vicuna 33B, RAIN establishes a new defense baseline by reducing the attack success rate from 94% to 19%.

Source: [https://old.reddit.com/r/singularity/comments/16qdm0s/rain_your_language_models_can_align_themselves/](https://old.reddit.com/r/singularity/comments/16qdm0s/rain_your_language_models_can_align_themselves/)
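The rewind-and-regenerate idea in the abstract can be caricatured in a few lines. This is only a loose sketch under stated assumptions, not the paper's token-level search: `generate` and `self_evaluate` are hypothetical placeholders for decoding and for the fixed-template self-evaluation prompt.

```python
# Loose sketch of RAIN-style inference: decode a draft, let the same frozen model
# judge it, and rewind (discard and redecode) when the judgment fails.
# Both helpers are illustrative placeholders, not the paper's implementation.

def generate(prompt: str, attempt: int) -> str:
    """Placeholder for autoregressive decoding; vary sampling per attempt in practice."""
    drafts = ["an unsafe draft", "a helpful and harmless reply"]
    return drafts[min(attempt, len(drafts) - 1)]

def self_evaluate(response: str) -> bool:
    """Placeholder for self-evaluation against a fixed-template preference prompt."""
    return "harmless" in response

def rain_style_inference(prompt: str, max_rewinds: int = 4) -> str:
    response = ""
    for attempt in range(max_rewinds):
        response = generate(prompt, attempt)
        if self_evaluate(response):
            return response          # evaluation passed: keep the generation
        # evaluation failed: "rewind" by discarding the draft and decoding again
    return response                  # fall back to the last draft

print(rain_style_inference("Explain how to stay safe online."))
```

No extra data, training, or parameter updates are involved; everything happens at inference time, which is the property the abstract emphasizes.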

1
0

[https://arxiv.org/abs/2309.11495](https://arxiv.org/abs/2309.11495)

**Abstract**

Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue in large language models. We study the ability of language models to deliberate on the responses they give in order to correct their mistakes. We develop the Chain-of-Verification (CoVe) method whereby the model first (i) drafts an initial response; then (ii) plans verification questions to fact-check its draft; (iii) answers those questions independently so the answers are not biased by other responses; and (iv) generates its final verified response. In experiments, we show CoVe decreases hallucinations across a variety of tasks, from list-based questions from Wikidata, closed book MultiSpanQA and longform text generation.

[https://i.imgur.com/TDXcdMI.jpeg](https://i.imgur.com/TDXcdMI.jpeg)

[https://i.imgur.com/XfRVxJT.jpeg](https://i.imgur.com/XfRVxJT.jpeg)

**Conclusion**

We introduced Chain-of-Verification (CoVe), an approach to reduce hallucinations in a large language model by deliberating on its own responses and self-correcting them. In particular, we showed that models are able to answer verification questions with higher accuracy than when answering the original query by breaking down the verification into a set of simpler questions. Secondly, when answering the set of verification questions, we showed that controlling the attention of the model so that it cannot attend to its previous answers (factored CoVe) helps alleviate copying the same hallucinations. Overall, our method provides substantial performance gains over the original language model response just by asking the same model to deliberate on (verify) its answer. An obvious extension to our work is to equip CoVe with tool-use, e.g., to use retrieval augmentation in the verification execution step, which would likely bring further gains.

Source: [https://old.reddit.com/r/singularity/comments/16qcdsz/research_paper_meta_chainofverification_reduces/](https://old.reddit.com/r/singularity/comments/16qcdsz/research_paper_meta_chainofverification_reduces/)
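The four steps quoted in the abstract map directly onto a small pipeline. The sketch below is an illustration only: `llm` is a generic placeholder completion call and the prompt wording is made up, not the paper's templates.

```python
# Hedged sketch of the Chain-of-Verification loop: draft, plan verification questions,
# answer them independently of the draft, then write a verified final answer.
# `llm` is a placeholder; the prompts are illustrative, not the paper's.

def llm(prompt: str) -> str:
    """Placeholder completion call; returns a stub so the sketch runs as-is."""
    return f"(model output for: {prompt.splitlines()[0][:50]}...)"

def chain_of_verification(query: str) -> str:
    # (i) draft an initial response
    draft = llm(f"Answer the question.\nQ: {query}\nA:")

    # (ii) plan verification questions that fact-check the draft
    plan = llm(f"List short questions that would verify this answer:\n{draft}")
    questions = [q.strip() for q in plan.splitlines() if q.strip()]

    # (iii) answer each question independently; keeping the draft out of the context
    #       (the "factored" variant) avoids copying its hallucinations
    checks = [(q, llm(f"Q: {q}\nA:")) for q in questions]

    # (iv) generate the final verified response, conditioned on the checks
    evidence = "\n".join(f"{q} -> {a}" for q, a in checks)
    return llm(
        f"Original question: {query}\n"
        f"Draft answer: {draft}\n"
        f"Verification Q&A:\n{evidence}\n"
        f"Write a corrected final answer:"
    )

print(chain_of_verification("Name some politicians who were born in New York."))
```

The tool-use extension mentioned in the conclusion would slot into step (iii), e.g. answering each verification question with retrieval instead of the bare model.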

2
0
https://www.youtube.com/watch?v=_t1GCQNUePU

Subscribe for good luck - Most Popular videos - [https://www.youtube.com/watch?v=72qmtr41iWU&list=UULP3MF3KCtKQkCudSvNpk3BOg#liminalspace](https://www.youtube.com/watch?v=72qmtr41iWU&list=UULP3MF3KCtKQkCudSvNpk3BOg#liminalspace) , [#ai](https://kbin.social/tag/ai) , [#aiart](https://kbin.social/tag/aiart) _Arti...

3
0
benmyers.dev

cross-posted from: https://lemmy.ml/post/5325676

> The past few months have launched generative AI models into the public eye, and everyone seems to have a take on it. Generative AI models such as large language models (LLMs) and AI art generators consume vast amounts of aggregated content, determine similarities between that content, and, when prompted, produce statistically likely, plausible-seeming output.
>
> The current state of generative AI is environmentally disastrous and built on the backbone of labor exploitation, particularly in the global south. Large language models' disregard for the truth is, at this point, well-documented.
>
> Technology is not neutral. Leveraging and normalizing generative AI is not a neutral act.

1
0
https://www.youtube.com/watch?v=nlsYvLtu7-Y

Subscribe for good luck - Most Popular videos - [https://www.youtube.com/watch?v=72qmtr41iWU&list=UULP3MF3KCtKQkCudSvNpk3BOg#aimusic](https://www.youtube.com/watch?v=72qmtr41iWU&list=UULP3MF3KCtKQkCudSvNpk3BOg#aimusic) , [#ai](https://kbin.social/tag/ai) , [#aiart](https://kbin.social/tag/aiart) _Artificia...

2
0
https://www.pcworld.com/article/2073592/googles-bard-ai-can-now-access-gmail-drive-docs-and-more.html

The latest updates to Google’s generative AI chatbot let it dig through your personal email, documents, and more, so you can get things done faster.

1
0
https://www.cnbc.com/2023/09/19/nearly-half-of-ceos-believe-ai-could-replace-their-own-jobs-poll.html

Many American CEOs say they're worried about their workplace's lack of AI skills, a new survey of C-suite executives and workers found. Here's why.

3
0
https://www.newsweek.com/china-aims-replicate-human-brain-bid-dominate-global-ai-1825084

The pursuit of the most advanced AI—human-like artificial general intelligence—has prompted concerns among experts about potential dangers if it runs amok.

2
0
https://www.youtube.com/watch?v=4ar8kYTxstA

Subscribe for good luck - Most Popular videos - [https://www.youtube.com/watch?v=72qmtr41iWU&list=UULP3MF3KCtKQkCudSvNpk3BOg#chatgpt](https://www.youtube.com/watch?v=72qmtr41iWU&list=UULP3MF3KCtKQkCudSvNpk3BOg#chatgpt) , [#ai](https://kbin.social/tag/ai) , [#art_Artificial](https://kbin.social/tag/art_Artificial) I...

3
0
https://www.pcworld.com/article/2064105/9-free-ai-tools-that-run-locally-on-the-pc.html

These clever AI tools can have a big impact by using elaborate models to tackle demanding tasks. The nine programs presented here have something in common besides AI: they are freely available.

3
0
https://www.youtube.com/watch?v=iS3UV5jNHfY

Subscribe for good luck - Most Popular videos - [https://www.youtube.com/watch?v=72qmtr41iWU&list=UULP3MF3KCtKQkCudSvNpk3BOg#superman](https://www.youtube.com/watch?v=72qmtr41iWU&list=UULP3MF3KCtKQkCudSvNpk3BOg#superman) , [#aiart](https://kbin.social/tag/aiart) , [#joker](https://kbin.social/tag/joker) _Arti...

1
0
https://techcrunch.com/2023/09/15/answering-ais-biggest-questions-requires-an-interdisciplinary-approach/

Ethical AI requires a deep understanding of what there is, what we want, what we think we know, and how intelligence unfolds.

3
0
https://techcrunch.com/2023/09/14/microsoft-open-sources-evodiff-a-novel-protein-generating-ai/

Microsoft has open sourced EvoDiff, an AI system and framework that can generate proteins without needing a protein sequence.

2
0
https://techcrunch.com/2023/09/15/superorder-raises-10m-to-help-restaurants-maintain-their-online-presence/

Superorder, a startup developing a platform to help restaurants maintain their online presence, has raised $10 million in a funding round.

1
0
https://www.wired.com/story/teachers-are-going-all-in-on-generative-ai/

Surveys suggest teachers use generative AI more than students, to create lesson plans or more interesting word problems. Educators say it can save valuable time but must be used carefully.

2
0
https://www.reuters.com/technology/google-nears-release-ai-software-gemini-information-2023-09-15/

Alphabet's Google has given a small group of companies access to an early version of Gemini, its conversational artificial intelligence software, The Information reported on Thursday, citing people familiar with the matter.

2
0
https://www.npr.org/sections/health-shots/2023/09/16/1199924303/chatgpt-ai-medical-advice

In recent research, AI has done a credible job of diagnosing health complaints. But should consumers trust unregulated bots with their health care? Doctors see trouble brewing.

2
0
https://www.artificialintelligence-news.com/2023/09/06/gitlab-developers-ai-essential-despite-concerns/

A survey by GitLab has shed light on the views of developers on the landscape of AI in software development.

2
0
https://www.artificialintelligence-news.com/2023/09/07/uk-ai-ecosystem-hit-2-4t-by-2027-third-global-race/

The UK's AI economy is soaring, valued at an impressive £1.36 trillion ($1.7 trillion) and showing no signs of slowing.

2
0