Machine Learning

Machine Learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

Command line interface github.com

Command-line tools for speech and intent recognition on Linux

This isn’t merely a speech-to-text thing. It also provides intent recognition, which makes it great for doing voice commands. For example, when trained with this template, the following command:

$ voice2json transcribe-wav \
      < turn-on-the-light.wav | \
      voice2json recognize-intent | \
      jq .

Produces this JSON event:

{
    "text": "turn on the light",
    "intent": {
        "name": "LightState"
    },
    "slots": {
        "state": "on"
    }
}

And it can be retrained quickly enough to do it at runtime. Cool stuff!
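
If you'd rather drive that pipeline from code than the shell, here's a minimal Python sketch of the same two commands chained via subprocess (assuming voice2json is installed and a profile is trained; the WAV filename is a placeholder):

import json
import subprocess

# Transcribe the WAV, then feed the transcription to intent recognition,
# mirroring the shell pipeline above.
with open("turn-on-the-light.wav", "rb") as wav:
    transcript = subprocess.run(
        ["voice2json", "transcribe-wav"],
        stdin=wav, capture_output=True, check=True,
    ).stdout

intent = subprocess.run(
    ["voice2json", "recognize-intent"],
    input=transcript, capture_output=True, check=True,
).stdout

event = json.loads(intent)
print(event["intent"]["name"], event["slots"])  # LightState {'state': 'on'}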

AI (Artificial Intelligence) exxactcorp.com

Disentangling AI, machine learning, and deep learning

This article starts with a concise description of the relationships and differences among these three commonly used industry terms, then digs into the history.

Deep learning is a subset of machine learning, which in turn is a subset of artificial intelligence, but the origins of these names arose from an interesting history. In addition, there are fascinating technical characteristics that can differentiate deep learning from other types of machine learning…essential working knowledge for anyone with ML, DL, or AI in their skillset.

The New Stack

How I built an on-premises AI training testbed with Kubernetes and Kubeflow

This is part 4 in a cool series on The New Stack exploring the Kubeflow machine learning platform.

I recently built a four-node bare metal Kubernetes cluster comprising CPU and GPU hosts for all my AI experiments. Though it makes economic sense to leverage the public cloud for provisioning the infrastructure, I invested a fortune in the AI testbed that’s within my line of sight.

The author shares many insights into the choices he made while building this dream setup.

Python github.com

A PyTorch-based speech toolkit

SpeechBrain is an open-source and all-in-one speech toolkit based on PyTorch.

The goal is to create a single, flexible, and user-friendly toolkit that can be used to easily develop state-of-the-art speech technologies, including systems for speech recognition, speaker recognition, speech enhancement, multi-microphone signal processing and many others.

Currently in beta.
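
As a taste of those ergonomics, here's a sketch of file transcription with one of the project's pretrained models (the model ID and pretrained interface follow SpeechBrain's own examples, but since it's in beta, details may shift):

from speechbrain.pretrained import EncoderDecoderASR

# Fetch a pretrained LibriSpeech ASR model and transcribe a local file.
asr = EncoderDecoderASR.from_hparams(
    source="speechbrain/asr-crdnn-rnnlm-librispeech",
    savedir="pretrained_asr",
)
print(asr.transcribe_file("my-audio.wav"))  # "my-audio.wav" is a placeholder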

Python github.com

`whereami` uses WiFi signals & ML to locate you (within 2-10 meters)

If you’re adventurous and want to learn to distinguish between couch #1 and couch #2 (i.e., 2 meters apart), it is most robust when you switch locations and train in turn: first in spot A, then in spot B, then start again with A. Training in spot A, then spot B, and then immediately using “predict” will usually yield spot B as the answer. No worries, the effect of this temporal overfitting disappears over time. In fact, it’s only a real concern for very short distances. Just take a sample after some time in both locations and it should become very robust.

The linked project was “almost entirely copied” from the find project, which was written in Go. It then went on to inspire whereami.js. I bet you can guess what that is.
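
The routine the quote describes boils down to two subcommands: learn and predict. Here's a hedged sketch of that A/B/A drill from Python, assuming the CLI takes one sample per learn invocation (command names are from the project's README; flags may differ by version):

import subprocess

def learn(label: str, samples: int = 20) -> None:
    # Record `samples` WiFi scans at the current spot under `label`.
    for _ in range(samples):
        subprocess.run(["whereami", "learn", "-l", label], check=True)

# Alternate locations while training: spot A, then B, then A again.
for label in ["spot_a", "spot_b", "spot_a"]:
    input(f"Move to {label} and press Enter...")
    learn(label)

print(subprocess.run(["whereami", "predict"],
                     capture_output=True, text=True, check=True).stdout)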

HackerNoon

Why ML in production is (still) broken and ways we can fix it

Hamza Tahir on HackerNoon:

By now, chances are you’ve read the famous paper about hidden technical debt by Sculley et al. from 2015. As a field, we have accepted that the actual share of Machine Learning is only a fraction of the work going into successful ML projects. The resulting complexity, especially in the transition to “live” environments, leads to large numbers of failed ML projects never reaching production.

Productionizing ML workflows has been a trending topic on Practical AI lately…

Elixir thinkingelixir.com

ML is coming to Elixir by way of José Valim's "Project Nx"

Elixir creator José Valim stopped by the Thinking Elixir podcast to reveal what he’s been working on for the past 3 months: Numerical Elixir!

This is an exciting development that brings Elixir into areas it hasn’t been used before. We also talk about what this means for Elixir and the community going forward. A must listen!

Queue up this episode and/or stay tuned for an upcoming episode of The Changelog where we’ll sit down with José after his LambdaDays demo to unpack things even more.

Machine Learning marksaroufim.substack.com

Machine Learning: The Great Stagnation

This piece by Mark Saroufim on the state of ML starts pretty salty:

Graduate Student Descent is one of the most reliable ways of getting state-of-the-art performance in Machine Learning today, and it’s also fully parallelizable over as many graduate students or employees as your lab has. Armed with Graduate Student Descent, you are more likely to get published or promoted than if you took on uncertain projects.

and:

BERT engineer is now a full time job. Qualifications include:

  • Some bash scripting
  • Deep knowledge of pip (starting a new environment is the suckier version of practicing scales)
  • Waiting for new HuggingFace models to be released
  • Watching Yannic Kilcher’s new Transformer paper the day it comes out
  • Repeating what Yannic said at your team reading group

It’s kind of like Dev-ops but you get paid more.

But if you survive through (or maybe even enjoy) the lamentations and ranting, you’ll find some hope and optimism around specific projects that the author believes are pushing the industry through its Great Stagnation.

I learned a few things. Maybe you will too.

Machine Learning huyenchip.com

The MLOps tooling landscape in early 2021 (284 tools)

Chip Huyen:

While looking for these MLOps tools, I discovered some interesting points about the MLOps landscape:

  1. Increasing focus on deployment
  2. The Bay Area is still the epicenter of machine learning, but not the only hub
  3. MLOps infrastructures in the US and China are diverging
  4. More interest in machine learning production from academia

If MLOps is new to you, Practical AI did a deep dive on the topic that will help you sort it out. Or if you’d prefer a shallow dive… just watch this.

Machine Learning blog.exxactcorp.com

A friendly introduction to Graph Neural Networks

Graph neural networks (GNNs) belong to a category of neural networks that operate naturally on data structured as graphs. Though they can be a confusing topic, GNNs can be distilled into just a handful of simple concepts.

Practical uses of GNNs include traffic prediction, search ranking, drug discovery, and more.
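
To make those "simple concepts" concrete, here's a toy message-passing layer in plain numpy, not tied to any GNN library: each node's new features are a degree-normalized aggregate of its neighbors' features, pushed through a weight matrix and a nonlinearity:

import numpy as np

# Tiny 4-node graph as an adjacency matrix (self-loops included).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A_hat = A / A.sum(axis=1, keepdims=True)  # normalize by node degree

H = np.random.randn(4, 8)  # node features: 4 nodes, 8 features each
W = np.random.randn(8, 8)  # learnable weight matrix

# One layer: aggregate neighbors, transform, apply ReLU.
H_next = np.maximum(A_hat @ H @ W, 0)
print(H_next.shape)  # (4, 8)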

AI (Artificial Intelligence) nullprogram.com

You might not need machine learning

Chris Wellons:

Machine learning is a trendy topic, so naturally it’s often used for inappropriate purposes where a simpler, more efficient, and more reliable solution suffices. The other day I saw an illustrative and fun example of this: Neural Network Cars and Genetic Algorithms. The video demonstrates 2D cars driven by a neural network with weights determined by a genetic algorithm. However, the entire scheme can be replaced by a first-degree polynomial without any loss in capability. The machine learning part is overkill.

Yet another example of a meta-trend in software: You might not need $X (where $X is a popular tool or technique that is on the upward side of the hype cycle).
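
Wellons' replacement, sketched: steering as a first-degree polynomial of the car's sensor readings, i.e. a handful of coefficients you could tune by hand (the sensor names and weights below are illustrative, not values from the post):

# Steering = linear combination of distance sensors + bias.
def steer(left: float, ahead: float, right: float) -> float:
    w_left, w_ahead, w_right, bias = 1.0, 0.0, -1.0, 0.0
    return w_left * left + w_ahead * ahead + w_right * right + bias

# More clearance on the left than the right => positive => turn left.
print(steer(left=5.0, ahead=10.0, right=2.0))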

Craig Kerstiens info.crunchydata.com

Building a recommendation engine inside Postgres with Python and Pandas

Craig Kerstiens told me about this on our recent Postgres episode of The Changelog and my jaw about dropped out of my mouth.

… earlier today I was starting to wonder why I couldn’t do more machine learning directly inside [Postgres]. Yeah, there is MADlib, but what if I wanted to write my own recommendation engine? So I set out on a total detour of a few hours and lo and behold, I can probably do a lot more of this in Postgres than I realized before. What follows is a quick walkthrough of getting a recommendation engine set up directly inside Postgres.

Craig doesn’t necessarily suggest you put this kind of solution in production, but he doesn’t come out and say don’t do it either. 😉
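
The heart of such an engine is small enough to sketch. Pandas logic along these lines (illustrative, not Craig's exact code) is what you'd wrap in a PL/Python function so it runs next to the data:

import numpy as np
import pandas as pd

# Toy ratings you'd normally pull from a table via PL/Python.
ratings = pd.DataFrame({
    "user":  ["a", "a", "b", "b", "c"],
    "item":  ["x", "y", "x", "z", "y"],
    "score": [5, 3, 4, 5, 4],
})

# Pivot to a user x item matrix, then compute cosine similarity of users.
matrix = ratings.pivot_table(index="user", columns="item",
                             values="score", fill_value=0)
norms = np.linalg.norm(matrix.values, axis=1, keepdims=True)
sims = pd.DataFrame(matrix.values @ matrix.values.T / (norms @ norms.T),
                    index=matrix.index, columns=matrix.index)

# Recommend items the most similar user liked that the target hasn't seen.
target = "a"
nearest = sims[target].drop(target).idxmax()
print(list(matrix.columns[(matrix.loc[target] == 0) & (matrix.loc[nearest] > 0)]))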

Machine Learning blog.acolyer.org

The case for a learned sorting algorithm

Adrian Colyer walks us through a paper from SageDB that’s taking machine learning and applying it to old Computer Science problems such as sorting. Here’s the big idea:

Suppose you had a model that given a data item from a list, could predict its position in a sorted version of that list. 0.239806? That’s going to be at position 287! If the model had 100% accuracy, it would give us a completed sort just by running over the dataset and putting each item in its predicted position. There’s a problem though. A model with 100% accuracy would essentially have to see every item in the full dataset and memorise its position – there’s no way training and then using such a model can be faster than just sorting, as sorting is a part of its training! But maybe we can sample a subset of the data and get a model that is a useful approximation, by learning an approximation to the CDF (cumulative distribution function).
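
Here's a toy rendering of that idea: learn an approximate CDF from a sample, drop each item near its predicted position, then clean up with insertion sort, which is fast on nearly-sorted input. This is the concept only, not SageDB's implementation:

import random
import numpy as np

data = [random.random() for _ in range(10_000)]

# "Train": approximate the CDF from a small random sample.
sample = np.sort(random.sample(data, 200))
n = len(data)

def predicted_position(x: float) -> int:
    # Fraction of the sample below x ~ CDF(x), scaled to an output index.
    return min(int(np.searchsorted(sample, x) / len(sample) * n), n - 1)

# Place items into buckets at their predicted positions.
buckets = [[] for _ in range(n)]
for x in data:
    buckets[predicted_position(x)].append(x)
out = [x for bucket in buckets for x in bucket]

# Touch-up pass: insertion sort over the nearly-sorted result.
for i in range(1, n):
    x, j = out[i], i - 1
    while j >= 0 and out[j] > x:
        out[j + 1] = out[j]
        j -= 1
    out[j + 1] = x

assert out == sorted(data)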

InfoQ

AI training method exceeds GPT-3 performance with 99.9% fewer parameters

A team of scientists at LMU Munich has developed Pattern-Exploiting Training (PET), a deep-learning training technique for natural language processing (NLP) models. Using PET, the team trained a Transformer NLP model with 223M parameters that outperformed the 175B-parameter GPT-3 by over 3 percentage points on the SuperGLUE benchmark.
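
PET's core move is recasting classification as a cloze task: wrap the input in a pattern containing a mask token and map each label to a "verbalizer" word, then let a masked language model fill the blank. A sketch with Hugging Face's fill-mask pipeline (the pattern and verbalizers are illustrative, not from the paper):

from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")
mask = fill.tokenizer.mask_token

# Pattern turns sentiment classification into fill-in-the-blank;
# verbalizers: "great" => positive, "terrible" => negative.
text = "The movie was a complete waste of two hours."
prompt = f"{text} All in all, it was {mask}."

for pred in fill(prompt, targets=["great", "terrible"]):
    print(pred["token_str"], round(pred["score"], 4))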

AI (Artificial Intelligence) github.com

Unsplash makes available 2M+ images for research and machine learning

They’ve split the dataset up into two bundles:

  1. Lite, which you can download w/ a click, but is limited to 25K images
  2. Full, which you have to request access to and is limited to non-commercial use

This is interesting for a couple of reasons. First, it’s a great resource for anyone training models for image classification, etc. Second, it’s a nice business model for Unsplash as a startup.
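
If you grab the Lite bundle, it ships as TSV files that load straight into pandas (file naming per the dataset's docs; exact columns may vary):

import glob
import pandas as pd

# The bundle is split across files like photos.tsv000, photos.tsv001, ...
photos = pd.concat(
    (pd.read_csv(f, sep="\t") for f in glob.glob("photos.tsv*")),
    ignore_index=True,
)
print(len(photos), "photos;", photos.columns.tolist()[:5])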

OpenAI

OpenAI now has an API

For years now I’ve been asking AI/ML experts when these powerful-yet-complicated tools will become available to average developers like you and me. It’s happening! Just look at how high-level this text generation code sample is:

import openai

prompt = """snipped for brevity's sake"""

# Ask the "davinci" model to continue the prompt, stopping at a newline.
response = openai.Completion.create(
    model="davinci",
    prompt=prompt,
    stop="\n",
    temperature=0.9,  # higher temperature => more varied completions
    max_tokens=100,   # cap the length of the generated text
)
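
Getting the generated text back out is just as high-level (assuming the beta's completion response shape):

# The generated text lives in the first (and here, only) choice.
print(response.choices[0].text)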

They’re offering all kinds of language tasks: semantic search, summarization, sentiment analysis, content generation, translation, and more. The API is still in beta and there’s a waitlist, but this is exciting news nonetheless.

Uber Engineering

A uniform interface to run deep learning models from multiple frameworks

Neuropod is a library that provides a uniform interface to run deep learning models from multiple frameworks in C++ and Python. Neuropod makes it easy for researchers to build models in a framework of their choosing while also simplifying productionization of these models.

This looks nice because you can make your inference code framework agnostic and easily switch between frameworks if necessary. Currently supports TensorFlow, PyTorch, TorchScript, and Keras.
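
Usage stays uniform regardless of what's inside the package. Here's a sketch based on the project's README (the model path and input name are placeholders):

import numpy as np
from neuropod.loader import load_neuropod

# Load a packaged model and run inference; these same two calls work
# whether the model inside was built with TensorFlow, PyTorch, etc.
with load_neuropod("my_model.neuropod") as model:
    results = model.infer({"x": np.array([1.0, 2.0, 3.0], dtype=np.float32)})
print(results)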

Python github.com

A research framework for reinforcement learning

Acme is a library of reinforcement learning (RL) agents and agent building blocks. Acme strives to expose simple, efficient, and readable agents that serve both as reference implementations of popular algorithms and as strong baselines, while still providing enough flexibility to do novel research. The design of Acme also attempts to provide multiple points of entry to the RL problem at differing levels of complexity.
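
The highest of those "points of entry" is remarkably small. Here's a sketch pieced together from the repo's examples at the time (the module paths and DQN constructor are the details most likely to drift):

import acme
import gym
import sonnet as snt
from acme import specs
from acme.agents.tf import dqn
from acme.wrappers import gym_wrapper

# Wrap a Gym environment so it speaks dm_env, and describe it to the agent.
env = gym_wrapper.GymWrapper(gym.make("CartPole-v1"))
env_spec = specs.make_environment_spec(env)

# A small Q-network; the DQN agent supplies replay, exploration, learning.
network = snt.Sequential([
    snt.Flatten(),
    snt.nets.MLP([64, 64, env_spec.actions.num_values]),
])
agent = dqn.DQN(environment_spec=env_spec, network=network)

# The environment loop drives the whole experiment.
acme.EnvironmentLoop(env, agent).run(num_episodes=100)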

Python github.com

A modular toolbox for accelerating meta-learning research

Meta-Blocks is a modular toolbox for research, experimentation, and reproducible benchmarking of learning-to-learn algorithms. The toolbox provides flexible APIs for working with MetaDatasets, TaskDistributions, and MetaLearners (see the figure below). The APIs make it easy to implement a variety of meta-learning algorithms, run them on well-established and emerging benchmarks, and add your own meta-learning problems to the suite and benchmark algorithms on them.

This repo is still under “heavy construction” (a.k.a. unstable) so downloader beware, but it’s worth a star/bookmark for later use.
