Machine Learning - Learning/Language Models
**Abstract** We evaluate the use of the open-source Llama-2 model for generating well-known, high-performance computing kernels (e.g., AXPY, GEMV, GEMM) on different parallel programming models and languages (e.g., C++: OpenMP, OpenMP Offload, OpenACC, CUDA, HIP; Fortran: OpenMP, OpenMP Offload, OpenACC; Python: numpy, Numba, pyCUDA, cuPy; and Julia: Threads, CUDA.jl, AMDGPU.jl). We build upon our previous work, based on OpenAI Codex (a descendant of GPT-3), which generated similar kernels from simple prompts via GitHub Copilot. Our goal is to compare the accuracy of Llama-2 against our original GPT-3 baseline using a similar metric. Llama-2, though a simpler model, shows competitive or even superior accuracy. We also report on the differences between these foundational large language models as generative AI continues to redefine human-computer interactions. Overall, Copilot generates code that is more reliable but less optimized, whereas code generated by Llama-2 is less reliable but more optimized when correct.
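For readers unfamiliar with the kernels named in the abstract, AXPY is the simplest of the three: the BLAS level-1 operation y ← αx + y. A minimal sketch in Python/numpy (one of the targets the study prompts the models to generate; this particular implementation is illustrative, not taken from the paper):

```python
import numpy as np

def axpy(alpha: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """AXPY kernel (BLAS level-1): returns alpha * x + y."""
    return alpha * x + y

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(axpy(2.0, x, y))  # [ 6.  9. 12.]
```

GEMV (matrix-vector product) and GEMM (matrix-matrix product) are the level-2 and level-3 analogues; their parallel variants (OpenMP, CUDA, etc.) are where the generated code quality differs most.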
Original (pay-walled): https://www.nytimes.com/2023/09/25/technology/chatgpt-rlhf-human-tutors.html
Original (pay-walled): https://www.wsj.com/tech/ai/meta-is-developing-a-new-more-powerful-ai-system-as-technology-race-escalates-decf9451
Corresponding arXiv preprint: https://arxiv.org/abs/2308.03762
Med-PaLM is a large language model (LLM) designed to provide high-quality answers to medical questions. Med-PaLM harnesses the power of Google’s large language models, which we have aligned to the medical domain and evaluated using medical exams, medical research, and consumer queries. Our first version of Med-PaLM, preprinted in late 2022 and published in Nature in July 2023, was the first AI system to surpass the pass mark on US Medical Licensing Exam (USMLE)-style questions. Med-PaLM also generates accurate, helpful long-form answers to consumer health questions, as judged by panels of physicians and users. We introduced our latest model, Med-PaLM 2, at Google Health’s annual health event The Check Up in March 2023. Med-PaLM 2 achieves an accuracy of 86.5% on USMLE-style questions, a 19% leap over our own state-of-the-art results from Med-PaLM. According to physicians, the model's long-form answers to consumer medical questions improved substantially. In the coming months, Med-PaLM 2 will also be made available to a select group of Google Cloud customers for limited testing, to explore use cases and share feedback, as we investigate safe, responsible, and meaningful ways to use this technology.
Nous-Hermes-Llama2-13b is currently the highest ranked 13B LLaMA finetune on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). **Model Description** Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine-tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. This Hermes model uses the exact same dataset as Hermes on Llama-1, to ensure consistency between the old and new Hermes for anyone who wants a model as similar to the original as possible, just more capable. This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096 sequence length on an 8x A100 80GB DGX machine. **Announcements** * https://twitter.com/NousResearch/status/1682458324804009987 * https://twitter.com/Teknium1/status/1682459395853279232
cross-posted from: https://lemmy.world/post/1954892 > It's looking really good! Major features include controlnet, support for SDXL, and a whole bunch of other cool things. > > Download: https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0
cross-posted from: https://lemmy.fmhy.ml/post/649641 > We could have AI models in a couple years that hold the entire internet in their context window.
[github repo](https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/PRO) [paper](https://lemmy.intai.tech/post/47230)
Docs: https://phasellm.com/docs/phasellm/eval.html

This project provides a unified framework to test generative language models on a large number of different evaluation tasks.

# Features:

- 200+ tasks implemented. See the task-table for a complete list.
- Support for models loaded via transformers (including quantization via AutoGPTQ), GPT-NeoX, and Megatron-DeepSpeed, with a flexible tokenization-agnostic interface.
- Support for commercial APIs including OpenAI, goose.ai, and TextSynth.
- Support for evaluation on adapters (e.g. LoRA) supported in HuggingFace's PEFT library.
- Evaluating with publicly available prompts ensures reproducibility and comparability between papers.
- Task versioning to ensure reproducibility when tasks are updated.
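The "evaluate with publicly available prompts" idea can be sketched in a few lines: score a model callable against a task's fixed prompt/answer pairs, so results are reproducible across papers. The names below are illustrative, not the framework's actual API:

```python
def evaluate(model, task):
    """Return exact-match accuracy of `model` over (prompt, answer) pairs."""
    correct = sum(model(prompt).strip() == answer for prompt, answer in task)
    return correct / len(task)

# Toy "model" (a lookup) and a toy task with fixed prompts
toy_task = [("2+2=", "4"), ("capital of France?", "Paris")]
toy_model = {"2+2=": "4", "capital of France?": "Paris"}.get
print(evaluate(toy_model, toy_task))  # 1.0
```

A real harness layers on tokenization-agnostic log-likelihood scoring, few-shot prompt construction, and per-task versioning, but the contract is the same: fixed inputs in, a comparable number out.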
**Model Description** Redmond-Hermes-Coder 15B is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine-tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. This model was trained with a WizardCoder base, which itself uses a StarCoder base model. The model is truly great at code, but it does come with a tradeoff. While far better at code than the original Nous-Hermes built on Llama, it is worse than WizardCoder at pure code benchmarks, like HumanEval: it comes in at 39% on HumanEval, versus WizardCoder's 57%. This is a preliminary experiment, and we are exploring improvements now. However, it does seem better than WizardCoder at non-code tasks, including writing. **Model Training** The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions. Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' (v1) GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions.
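The HumanEval numbers quoted above (39% vs. 57%) measure functional correctness: a completion counts only if the generated code passes the problem's unit tests. A minimal sketch of that check, under the assumption of trusted input (the real harness sandboxes execution and enforces timeouts):

```python
def passes(completion: str, test_code: str) -> bool:
    """Return True if `completion` defines code that passes `test_code`'s asserts."""
    env = {}
    try:
        exec(completion, env)  # define the candidate function
        exec(test_code, env)   # run the problem's assertions in the same namespace
        return True
    except Exception:
        return False

sample = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5"
print(passes(sample, tests))  # True
```

pass@1 is then just the fraction of problems whose first sampled completion passes.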
## Models
- [openchat](https://huggingface.co/openchat/openchat)
- [openchat_8192](https://huggingface.co/openchat/openchat_8192)
- [opencoderplus](https://huggingface.co/openchat/opencoderplus)

## Datasets
- [openchat_sharegpt4_dataset](https://lemmy.intai.tech/post/40692)

## Repos
- [openchat](https://github.com/imoneoi/openchat)

## Related Papers
- [LIMA: Less Is More For Alignment](https://lemmy.intai.tech/post/10277)
- [ORCA](https://lemmy.intai.tech/post/650)

### Credit:
[Tweet](https://twitter.com/Yampeleg/status/1675165254144126978)

### Archive:
@Yampeleg The first model to beat 100% of ChatGPT-3.5. Available on Huggingface

🔥 OpenChat_8192 🔥 105.7% of ChatGPT (Vicuna GPT-4 Benchmark)

Less than a month ago the world witnessed as ORCA [1] became the first model to ever outpace ChatGPT on Vicuna's benchmark. Today, the race to replicate these results open-source comes to an end. Minutes ago OpenChat scored 105.7% of ChatGPT.

But wait! There is more! Not only did OpenChat beat Vicuna's benchmark, it did so pulling off a LIMA [2] move: training was done using 6K GPT-4 conversations out of the ~90K ShareGPT conversations.

The model comes in three versions: the basic OpenChat model, OpenChat-8192, and OpenCoderPlus (code generation: 102.5% of ChatGPT). This is a significant achievement considering that it's the first (released) open-source model to surpass the Vicuna benchmark. 🎉🎉

- OpenChat: https://huggingface.co/openchat/openchat
- OpenChat_8192: https://huggingface.co/openchat/openchat_8192 (best chat)
- OpenCoderPlus: https://huggingface.co/openchat/opencoderplus (best coder)
- Dataset: https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset
- Code: https://github.com/imoneoi/openchat

Congratulations to the authors!!
---
[1] - Orca: The first model to cross 100% of ChatGPT: https://arxiv.org/pdf/2306.02707.pdf
[2] - LIMA: Less Is More for Alignment - TL;DR: Using a small number of VERY high-quality samples (1,000 in the paper) can be as powerful as much larger datasets: https://arxiv.org/pdf/2305.11206
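For clarity on what "105.7% of ChatGPT" means: on the Vicuna benchmark, GPT-4 assigns per-question scores to both models' answers, and a model's total is reported as a percentage of ChatGPT's total. A sketch with made-up scores (the function name and numbers are illustrative only):

```python
def relative_score(model_scores, chatgpt_scores):
    """Report a model's judge-score total as a percentage of ChatGPT's total."""
    return 100.0 * sum(model_scores) / sum(chatgpt_scores)

# Two toy questions: the model slightly outscores ChatGPT overall
print(relative_score([9, 8], [8, 8]))  # 106.25
```

A value above 100 therefore means the judged answer quality exceeded ChatGPT's on aggregate, not that the model won every question.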
- [Paper](https://arxiv.org/pdf/2306.15794.pdf) - [Models](https://huggingface.co/LongSafari) - [Github](https://github.com/HazyResearch/hyena-dna) - [Colab](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing)
https://docs.google.com/spreadsheets/d/1kT4or6b0Fedd-W_jMwYpb63e1ZR3aePczz3zlbJW-Y4/edit?usp=sharing
https://huggingface.co/docs/transformers/model_doc/blip
- [blog post](https://blog.salesforceairesearch.com/xgen/) - [github repo](https://github.com/salesforce/xgen)