Prompt Engineering Tips, a Neural Network How-To, and Other Recent Must-Reads

We’ve been feeling a nice jolt of energy in the past month, as many of our authors switched gears from summer mode into fall, with a renewed focus on learning, experimenting, and launching new projects.

We’ve published far more excellent posts in September than we could ever highlight here, but we still wanted to make sure you don’t miss some of our recent standouts. Below is a selection of articles that resonated strongly with our community—whether by the sheer number of readers they attracted, the lively conversations they inspired, or the cutting-edge topics they covered. We’re sure you’ll enjoy exploring them.

Photo by Daria Volkova on Unsplash
  • How to Design a Roadmap for a Machine Learning Project
    For those of you who are already well into your ML journey, Heather Couture’s new article offers a helpful framework for streamlining the design of your next project. From a robust literature review to post-deployment maintenance, it covers all the bases for a successful, iterative workflow.
  • Machine Learning’s Public Perception Problem
    In a thought-provoking reflection, Stephanie Kirmer tackles a fundamental tension in the current debates around AI: “all our work in the service of building more and more advanced machine learning is limited in its possibility not by the number of GPUs we can get our hands on but by our capacity to explain what we build and educate the public on what it means and how to use it.”
  • How to Build an LLM from Scratch
    Taking a cue from the development process of models like GPT-3 and Falcon, Shawhin Talebi reviews the key aspects of creating a foundation LLM. Even if you’re not planning to train the next Llama anytime soon, it’s valuable to understand the practical considerations that go into such a massive undertaking.
  • Your Own Personal ChatGPT
    If you are in the mood for building and tinkering with language models, however, a great place to start is Robert A. Gonsalves’s detailed overview of what it takes to fine-tune OpenAI’s GPT-3.5 Turbo model to perform new tasks using your own custom data.
  • How to Build a Multi-GPU System for Deep Learning in 2023
    Don’t roll down your sleeves just yet—one of our most-read tutorials in September, by Antonis Makropoulos, focuses on deep-learning hardware and infrastructure, and walks us through the nitty-gritty details of choosing the right components for your project’s needs.
  • Meta-Heuristics Explained: Ant Colony Optimization
    For a more theoretical—but no less fascinating—topic, Hennie de Harder’s introduction to ant-colony optimization draws our attention to a “lesser-known gem” of an algorithm, explores how it took inspiration from the ingenious foraging behaviors of ants, and unpacks its inner workings. (In a follow-up post, Hennie also demonstrates how it can solve real-world problems.)
  • Falcon 180B: Can It Run on Your Computer?
Closing on an ambitious note, Benjamin Marie sets out to determine whether one can run the (very, very large) Falcon 180B model on consumer-grade hardware. (Spoiler alert: yes, with a couple of caveats.) It’s a valuable resource for anyone weighing the pros and cons of working on a local machine vs. using cloud services—especially now that more and more open-source LLMs are arriving on the scene.
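Hennie’s ant-colony piece above unpacks the algorithm’s inner workings in depth; as a rough, unofficial illustration of the core idea—ants build random tours biased by pheromone trails, trails evaporate, and good tours get reinforced—here is a minimal sketch applied to a tiny traveling-salesman instance. All function names and parameter choices here are our own, not from the article:

```python
import random

def ant_colony_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
                   evaporation=0.5, q=1.0, seed=0):
    """Minimal ant colony optimization for the traveling salesman problem.

    dist: symmetric matrix of pairwise distances between cities.
    Returns (best_tour, best_length).
    """
    rng = random.Random(seed)
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]  # uniform initial trails
    best_tour, best_len = None, float("inf")

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                cur = tour[-1]
                candidates = list(unvisited)
                # Desirability of each next city: pheromone^alpha * (1/distance)^beta
                weights = [pheromone[cur][j] ** alpha * (1.0 / dist[cur][j]) ** beta
                           for j in candidates]
                nxt = rng.choices(candidates, weights=weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)

        # Evaporate all trails, then deposit pheromone proportional to tour quality
        for i in range(n):
            for j in range(n):
                pheromone[i][j] *= (1.0 - evaporation)
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a][b] += q / length
                pheromone[b][a] += q / length

    return best_tour, best_len
```

On a small instance—say, four cities at the corners of a unit square—the colony quickly converges on the perimeter tour, since edges on short tours accumulate pheromone faster than evaporation removes it.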

Our latest cohort of new authors

Every month, we’re thrilled to see a fresh group of authors join TDS, each sharing their own unique voice, knowledge, and experience with our community. If you’re looking for new writers to explore and follow, just browse the work of our latest additions, including Rahul Nayak, Christian Burke, Aicha Bokbot, Jason Vega, Giuseppe Scalamogna, Masatake Hirono, Shachaf Poran, Aris Tsakpinis, Niccolò Granieri, Lazare Kolebka, Ninad Sohoni, Mina Ghashami, Carl Bettosi, Dominika Woszczyk, James Koh, PhD, Tom Corbin, Antonio Jimenez Caballero, Gijs van den Dool, Ramkumar K, Milan Janosov, Luke Zaruba, Sohrab Sani, James Hamilton, Ilija Lazarevic, Josh Poduska, Antonis Makropoulos, Yuichi Inoue, George Stavrakis, Yunzhe Wang, Anjan Biswas, Jared M. Maruskin, PhD, Michael Roizner, Alana Rister, Ph.D., Damian Gil, Shafquat Arefeen, Dmitry Kazhdan, Ryan Pégoud, and Robert Martin-Short.

Thank you for supporting the work of our authors! If you enjoy the articles you read on TDS, consider becoming a Medium member — it unlocks our entire archive (and every other post on Medium, too).

Until the next Variable,

TDS Editors

Prompt Engineering Tips, a Neural Network How-To, and Other Recent Must-Reads was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story.
