
AI (Artificial Intelligence)

Machines simulating human characteristics and intelligence.
212 Stories

Machine Learning marksaroufim.substack.com

Machine Learning: The Great Stagnation

This piece by Mark Saroufim on the state of ML starts pretty salty:

Graduate Student Descent is one of the most reliable ways of getting state of the art performance in Machine Learning today, and it’s also fully parallelizable over as many graduate students or employees as your lab has. Armed with Graduate Student Descent you are more likely to get published or promoted than if you took on uncertain projects.

and:

BERT engineer is now a full time job. Qualifications include:

  • Some bash scripting
  • Deep knowledge of pip (starting a new environment is the suckier version of practicing scales)
  • Waiting for new HuggingFace models to be released
  • Watching Yannic Kilcher’s new Transformer paper the day it comes out
  • Repeating what Yannic said at your team reading group

It’s kind of like Dev-ops but you get paid more.

But if you survive through (or maybe even enjoy) the lamentations and ranting, you’ll find some hope and optimism around specific projects that the author believes are pushing the industry through its Great Stagnation.

I learned a few things. Maybe you will too.
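
Tongue-in-cheek as it is, “Graduate Student Descent” boils down to embarrassingly parallel trial-and-error over hyperparameters. Here’s a minimal sketch of that idea (the search space, scoring, and names are all made up for illustration):

    import random
    from concurrent.futures import ProcessPoolExecutor

    # Hypothetical search space: each "graduate student" tries a random config.
    SEARCH_SPACE = {
        "learning_rate": [1e-4, 3e-4, 1e-3],
        "dropout": [0.0, 0.1, 0.3],
        "num_layers": [2, 4, 8],
    }

    def train_and_evaluate(config):
        """Stand-in for a real training run; returns a fake validation score."""
        random.seed(str(config))
        return random.random(), config

    def graduate_student_descent(num_students=8):
        # Fully parallelizable: one worker per "student", each tweaking knobs at random.
        configs = [{k: random.choice(v) for k, v in SEARCH_SPACE.items()}
                   for _ in range(num_students)]
        with ProcessPoolExecutor() as pool:
            results = pool.map(train_and_evaluate, configs)
        return max(results, key=lambda r: r[0])  # keep whichever run looked best

    if __name__ == "__main__":
        score, config = graduate_student_descent()
        print(f"best score {score:.3f} with {config}")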

Practical AI #119

Accelerating ML innovation at MLCommons

MLCommons launched in December 2020 as an open engineering consortium that seeks to accelerate machine learning innovation and broaden access to this critical technology for the public good. David Kanter, the executive director of MLCommons, joins us to discuss the launch and the ambitions of the organization.

In particular we discuss the three pillars of the organization: Benchmarks and Metrics (e.g. MLPerf), Datasets and Models (e.g. People’s Speech), and Best Practices (e.g. MLCube).

Practical AI #118

The $1 trillion dollar ML model 💵

American Express is running what is perhaps the largest commercial ML model in the world: a model that automates over 8 billion decisions, ingests data from over $1T in transactions, and generates decisions in mere milliseconds, globally. Madhurima Khandelwal, head of AMEX AI Labs, joins us for a fascinating discussion about scaling research and building robust and ethical AI-driven financial applications.

Practical AI #116

Engaging with governments on AI for good

At this year’s Government & Public Sector R Conference (or R|Gov) our very own Daniel Whitenack moderated a panel on how AI practitioners can engage with governments on AI for good projects. That discussion is being republished in this episode for all our listeners to enjoy!

The panelists were Danya Murali from Arcadia Power and Emily Martinez from the NYC Department of Health and Mental Hygiene. Danya and Emily gave some great perspectives on sources of government data, ethical uses of data, and privacy.

Practical AI #115

From research to product at Azure AI

Bharat Sandhu, Director of Azure AI and Mixed Reality at Microsoft, joins Chris and Daniel to talk about how Microsoft is making AI accessible and productive for users, and how AI solutions can address real-world challenges that customers face. He also shares Microsoft’s research-to-product process, the advances they have made in computer vision and image captioning, and how researchers were able to make AI that can describe images as well as people do.

Machine Learning blog.exxactcorp.com

A friendly introduction to Graph Neural Networks

Graph neural networks (GNNs) belong to a category of neural networks that operate naturally on data structured as graphs. Despite being a potentially confusing topic, GNNs can be distilled into just a handful of simple concepts.

Practical uses of GNNs include traffic prediction, search ranking, drug discovery, and more.
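
To give a flavor of how simple the core idea can be, here’s a hypothetical NumPy-only sketch of one message-passing step, where each node averages its neighbors’ features and pushes them through a learned transform (a toy, not any particular library’s API):

    import numpy as np

    def gnn_layer(node_features, adjacency, weights):
        """One simplified message-passing step.

        node_features: (num_nodes, in_dim) feature matrix
        adjacency:     (num_nodes, num_nodes) 0/1 adjacency matrix
        weights:       (in_dim, out_dim) learned projection
        """
        adj = adjacency + np.eye(adjacency.shape[0])   # self-loops: keep your own features
        adj = adj / adj.sum(axis=1, keepdims=True)     # average over neighbors
        return np.maximum(adj @ node_features @ weights, 0)  # aggregate, project, ReLU

    # Toy graph: three nodes in a chain (0-1-2), 4-dim features, 2-dim output.
    A = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
    X = np.random.randn(3, 4)
    W = np.random.randn(4, 2)
    print(gnn_layer(X, A, W))  # one new embedding per node

Stack a few of those layers and each node’s embedding starts to reflect its wider neighborhood, which is the property those traffic, search, and drug-discovery applications lean on.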

Practical AI #114

The world's largest open library dataset

Unsplash has released the world’s largest open library dataset, which includes 2M+ high-quality Unsplash photos, 5M keywords, and over 250M searches. They have big ideas about how the dataset might be used by ML/AI folks, and there have already been some interesting applications. In this episode, Luke and Tim discuss why they released this data and what it takes to maintain a dataset of this size.
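
If you want to poke at it yourself, the dataset ships as flat files you can load straight into pandas. A minimal sketch, assuming TSV file names like those in the downloadable archive (treat them as placeholders and adjust to whatever your download actually contains):

    import pandas as pd

    # Assumed file names from the dataset archive; adjust as needed.
    photos = pd.read_csv("photos.tsv000", sep="\t")
    keywords = pd.read_csv("keywords.tsv000", sep="\t")

    print(photos.shape, keywords.shape)
    print(photos.columns.tolist())  # see which photo metadata fields are available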

AI (Artificial Intelligence) nullprogram.com

You might not need machine learning

Chris Wellons:

Machine learning is a trendy topic, so naturally it’s often used for inappropriate purposes where a simpler, more efficient, and more reliable solution suffices. The other day I saw an illustrative and fun example of this: Neural Network Cars and Genetic Algorithms. The video demonstrates 2D cars driven by a neural network with weights determined by a genetic algorithm. However, the entire scheme can be replaced by a first-degree polynomial without any loss in capability. The machine learning part is overkill.

Yet another example of a meta-trend in software: You might not need $X (where $X is a popular tool or technique that is on the upward side of the hype cycle).
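
For the curious, “first-degree polynomial” here just means a weighted sum of the car’s distance sensors. A hedged sketch of that kind of controller (sensor names and coefficients are invented for illustration, not taken from the video):

    def steer(left_distance, right_distance, a=1.0, b=-1.0, c=0.0):
        """First-degree polynomial controller: positive output steers left,
        negative steers right. The coefficients a, b, c would be tuned (or fit)
        for the actual track and sensor layout."""
        return a * left_distance + b * right_distance + c

    # The right wall is closer than the left wall, so steer away from it (left).
    print(steer(left_distance=5.0, right_distance=2.0))  # 3.0 -> steer left

No training loop, no weights file, and you can reason about exactly why it works.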

Practical AI #113

A casual conversation concerning causal inference

Lucy D’Agostino McGowan, cohost of the Casual Inference Podcast and a professor at Wake Forest University, joins Daniel and Chris for a deep dive into causal inference. Referring to current events (e.g. misreporting of COVID-19 data in Georgia) as examples, they explore how we interact with, analyze, trust, and interpret data - addressing underlying assumptions, counterfactual frameworks, and unmeasured confounders (Chris’s next Halloween costume).

Practical AI #112

Building a deep learning workstation

What’s it like to try and build your own deep learning workstation? Is it worth it in terms of money, effort, and maintenance? Then once built, what’s the best way to utilize it? Chris and Daniel dig into those questions today as they talk about Daniel’s recent workstation build. He built a workstation for his NLP and Speech work with two GPUs, and it has been serving him well (minus a few things he would change if he did it again).
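
Once a build like Daniel’s is up, the first sanity check is whether your framework actually sees both GPUs. A minimal PyTorch sketch (assuming CUDA drivers and PyTorch are already installed):

    import torch

    # Confirm the deep learning stack can see the hardware you just built.
    print("CUDA available:", torch.cuda.is_available())
    print("GPU count:", torch.cuda.device_count())
    for i in range(torch.cuda.device_count()):
        print(f"  cuda:{i} ->", torch.cuda.get_device_name(i))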

Learn github.com

A roadmap to becoming an AI expert in 2020

Below you will find a set of charts demonstrating the paths that you can take and the technologies that you would want to adopt in order to become a data scientist, machine learning engineer, or AI expert. We made these charts for our new employees to make them AI experts, but we wanted to share them here to help the community.

I didn’t embed the roadmap images because there are too many of them and they’re too vertical to fit. It sounds like an interactive version is Coming Soon™️, but don’t wait on that to get started here. 2020 is almost over. 😉

Practical AI #109

When data leakage turns into a flood of trouble

Rajiv Shah teaches Daniel and Chris about data leakage and its major impact on machine learning models. It’s the kind of topic that we don’t often think about, but which can ruin our results. Raj discusses how to use activation maps and image embeddings to find leakage, so that information from our test set does not find its way into our training set.
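
The tabular version of the problem is easy to demonstrate: any preprocessing fit on all of the data before splitting quietly bakes test-set statistics into training. A small illustrative scikit-learn sketch (not from the episode):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    X = np.random.randn(1000, 20)
    y = (X[:, 0] > 0).astype(int)

    # Leaky: the scaler sees the test rows, so their statistics inform training.
    leaky_scaler = StandardScaler().fit(X)
    X_leaky = leaky_scaler.transform(X)
    X_train, X_test, y_train, y_test = train_test_split(X_leaky, y, random_state=0)

    # Safe: split first, then fit preprocessing on the training data only.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    safe_scaler = StandardScaler().fit(X_train)
    X_train, X_test = safe_scaler.transform(X_train), safe_scaler.transform(X_test)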

Practical AI #108

Productionizing AI at LinkedIn

Suju Rajan from LinkedIn joined us to talk about how they are operationalizing state-of-the-art AI at LinkedIn. She sheds light on how AI can be (and is being) used in recruiting, and she weaves in some great explanations of how graph-structured data, personalization, and representation learning can be applied to LinkedIn’s candidate search problem. Suju is passionate about helping people deal with machine learning technical debt, and that gives this episode a good dose of practicality.

InfoQ

AI training method exceeds GPT-3 performance with 99.9% fewer parameters

A team of scientists at LMU Munich has developed Pattern-Exploiting Training (PET), a deep-learning training technique for natural language processing (NLP) models. Using PET, the team trained a Transformer NLP model with 223M parameters that outperformed the 175B-parameter GPT-3 by over 3 percentage points on the SuperGLUE benchmark.
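
The core trick in PET is reformulating a classification task as a cloze (fill-in-the-blank) question that a masked language model can already answer, with a “verbalizer” mapping each label to a word. A simplified, hypothetical sketch of that reformulation (not the authors’ code):

    # Sentiment as a cloze task, PET-style (simplified illustration).
    PATTERN = "{text} All in all, it was [MASK]."               # the "pattern"
    VERBALIZER = {"positive": "great", "negative": "terrible"}  # label -> word

    def to_cloze(text):
        return PATTERN.format(text=text)

    def predict(mask_word_probs):
        """mask_word_probs: the masked LM's probabilities for the [MASK] token.
        Pick the label whose verbalized word the model finds most likely."""
        return max(VERBALIZER, key=lambda label: mask_word_probs.get(VERBALIZER[label], 0.0))

    print(to_cloze("Best pizza in town."))
    print(predict({"great": 0.61, "terrible": 0.02}))  # -> "positive"

In the paper, several such pattern-verbalizer pairs are fine-tuned on a small labeled set and then used to soft-label unlabeled data for training the final model.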

NVIDIA Developer Blog

NVIDIA's new GAN reduces video bandwidth by orders of magnitude

This is bonkers:

New AI breakthroughs in NVIDIA Maxine, a cloud-native video streaming AI SDK, slash bandwidth use while making it possible to re-animate faces, correct gaze, and animate characters for immersive and engaging meetings.

Instead of transferring your face at N frames per second, they transfer it once at the beginning of the call and then update key positions over time. The results are super impressive (and just a bit creepy?).
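
In other words, the “codec” becomes a generative model on the receiving end. A hedged pseudo-sketch of the flow (function names are placeholders, not NVIDIA’s API):

    # Hypothetical sketch of keypoint-based video compression (not NVIDIA's API).

    def sender(camera_frames, extract_keypoints, send):
        """camera_frames: an iterator of frames; send: whatever puts bytes on the wire."""
        reference = next(camera_frames)
        send({"type": "reference", "frame": reference})  # full frame, sent once
        for frame in camera_frames:
            # A handful of floats per frame instead of a full image.
            send({"type": "keypoints", "points": extract_keypoints(frame)})

    def receiver(messages, generator):
        """generator: a GAN that re-animates the reference face from sparse keypoints."""
        reference = None
        for msg in messages:
            if msg["type"] == "reference":
                reference = msg["frame"]
            else:
                yield generator(reference, msg["points"])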

Microsoft github.com

Microsoft's deep learning approach to restoring old photos

What’s linked is the official PyTorch implementation of a paper published in April of this year called Bringing Old Photos Back to Life.

We propose to restore old photos that suffer from severe degradation through a deep learning approach. Unlike conventional restoration tasks that can be solved through supervised learning, the degradation in real photos is complex and the domain gap between synthetic images and real old photos makes the network fail to generalize. Therefore, we propose a novel triplet domain translation network by leveraging real photos along with massive synthetic image pairs. Specifically, we train two variational autoencoders (VAEs) to respectively transform old photos and clean photos into two latent spaces.

The results are impressive!
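
At a high level the architecture is: one VAE for the “old photo” domain (real plus synthetic), one for clean photos, and a learned mapping between their latent spaces trained on synthetic pairs. A rough PyTorch-flavored skeleton, with made-up module names, just to show the data flow:

    import torch.nn as nn

    class RestorationSketch(nn.Module):
        """Illustrative skeleton only; the real model is considerably more involved."""
        def __init__(self, vae_old, vae_clean, latent_mapper):
            super().__init__()
            self.vae_old = vae_old              # VAE over real + synthetic old photos
            self.vae_clean = vae_clean          # VAE over clean photos
            self.latent_mapper = latent_mapper  # learned on synthetic old/clean pairs

        def forward(self, old_photo):
            z_old = self.vae_old.encode(old_photo)  # into the "old photo" latent space
            z_clean = self.latent_mapper(z_old)     # translate across the domain gap
            return self.vae_clean.decode(z_clean)   # decode as a restored, clean photo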


Practical AI #106

Learning about (Deep) Learning

In anticipation of the upcoming NVIDIA GPU Technology Conference (GTC), Will Ramey joins Daniel and Chris to talk about education for artificial intelligence practitioners, and specifically the role that the NVIDIA Deep Learning Institute plays in the industry. Will’s insights from long experience are shaping how we all stay on top of AI, so don’t miss this ‘must learn’ episode.

Practical AI #105

When AI goes wrong

So, you trained a great AI model and deployed it in your app? It’s smooth sailing from there, right? Well, not in most people’s experience. Sometimes things go wrong, and you need to know how to respond to a real-life AI incident. In this episode, Andrew and Patrick from BNH.ai join us to discuss an AI incident response plan, along with some general discussion of debugging models, discrimination, privacy, and security.

Practical AI #104

Speech tech and Common Voice at Mozilla

Many people are excited about creating usable speech technology. However, most of the audio data used by large companies isn’t available to the majority of people, and that data is often biased in terms of language, accent, and gender. Jenny, Josh, and Remy from Mozilla join us to discuss how Mozilla is building an open-source voice database that anyone can use to make innovative apps for devices and the web (Common Voice). They also discuss efforts through Mozilla’s fellowship program to develop speech tech for African languages and to understand bias in data sets.
