Arek Nawo
13 Jun 2019
4 min read

JS Tidbits - MediaStream API

Today I have something different for you. Instead of a full-blown post, we’re going with a tiny one! So, without further ado, let’s talk about MediaStream API!

The API itself can be considered related to WebRTC - a collection of APIs used for real-time data transfer. As its name suggests, it helps you create streams of data coming from the user’s input devices (media), such as video (camera) or audio (microphone)!

Video and audio

At the heart of MediaStream API is the [.getUserMedia()](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia) method of [navigator.mediaDevices](https://developer.mozilla.org/en-US/docs/Web/API/Navigator/mediaDevices). Note that this object will only be available when you’re working in a secure (HTTPS) context.

async function getMedia() {
  const constraints = {
    // ...
  };
  let stream;

  try {
    stream = await navigator.mediaDevices.getUserMedia(constraints);
    // use the stream
  } catch (err) {
    // handle the error - user's rejection or no media available
  }
}

getMedia();

The method takes what’s called a constraints object and returns a promise that resolves to a new [MediaStream](https://developer.mozilla.org/en-US/docs/Web/API/MediaStream) instance. This interface represents the currently streamed media. It consists of zero or more separate [MediaStreamTrack](https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack)s, each representing a video or audio stream, where audio tracks can in turn contain left and right channels (for stereo and such). These tracks also provide some special methods and events if you need further control, though we won’t be going that in-depth here.
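
Just as a quick sketch of what that looks like in practice (assuming stream is the MediaStream our getMedia() call retrieved), you can list and control the individual tracks like so:

// assuming "stream" is the MediaStream retrieved inside getMedia()
stream.getTracks().forEach(track => {
  console.log(track.kind, track.label); // "video"/"audio" + the device's name
});

// temporarily disable the video track (the camera itself keeps running)...
const [videoTrack] = stream.getVideoTracks();
videoTrack.enabled = false;
// ...or stop it for good, releasing the device
videoTrack.stop();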

Constraints

Let’s now talk about our constraints object, as it’s a very important piece. It’s used to configure our .getUserMedia() request and the resulting stream. This object can have two properties - audio and video - each accepting either a boolean or an object value.

const constraints = {
  video: true,
  audio: true
};
// ...

By setting both of them to true, we’re requesting access to the user’s default video and audio input devices, with default settings applied. Note that for your .getUserMedia() call to work, you have to set at least one of those two properties.
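
For example, if all we care about is the microphone, a bare-bones audio-only request (just a sketch) could look like this:

// e.g. inside an async function - request only the user's default audio input
const audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });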

If you want to further configure the settings for your media source device, you’ll need to pass an object. The list of properties available here is quite long and differs based on the type of track it’s applied to (video or audio). You can see a complete list of those here, and check which ones are available with the [.getSupportedConstraints()](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getSupportedConstraints) method.
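
Just to give you an idea before we get to the full example - for supported properties, the values don’t have to be plain numbers or booleans. You can also pass objects with min, max, ideal, or exact keys to tell the browser how strictly to match them (the numbers below are just examples):

const constraints = {
  video: {
    // prefer Full HD, but accept whatever the camera can actually deliver
    width: { ideal: 1920 },
    height: { ideal: 1080 },
    // fail the request entirely if a front-facing camera isn't available
    facingMode: { exact: "user" }
  },
  audio: true
};
// ...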

So, let’s say we want to be a bit more specific this time and specify some additional configuration for our video track - its width, height, and input device.

async function getConstraints() {
  const supportedConstraints = navigator.mediaDevices.getSupportedConstraints();
  const video = {};

  if (supportedConstraints.width) {
    video.width = 1920;
  }
  if (supportedConstraints.height) {
    video.height = 1080;
  }
  if (supportedConstraints.deviceId) {
    // pick the first available video input device (if there is one)
    const devices = await navigator.mediaDevices.enumerateDevices();
    const device = devices.find(device => {
      return device.kind === "videoinput";
    });

    if (device) {
      video.deviceId = device.deviceId;
    }
  }

  return { video };
}
// ...

Notice how we check whether a given constraint is supported (although all of the ones used above should always be - unless there’s no input device) and, besides setting the video’s resolution, use the [.enumerateDevices()](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/enumerateDevices) method to check for available input devices. It returns a promise that resolves to an array of [MediaDeviceInfo](https://developer.mozilla.org/en-US/docs/Web/API/MediaDeviceInfo) objects, which we then use to select the first available video input. Then, a simple edit to our getMedia() function, and we’re good to go!

// ...
const constraints = await getConstraints();
// ...

Even smaller tidbits…

You might be interested to hear that there’s also a method for accessing the device’s screen (aka the Screen Capture API), in the form of the [.getDisplayMedia()](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia) method (with limited browser support).

navigator.mediaDevices.getDisplayMedia();
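
Like .getUserMedia(), it returns a promise resolving to a MediaStream, so a basic screen-capture request (just a sketch - it usually needs to be triggered by a user gesture, like a button click) could look like this:

async function captureScreen() {
  try {
    // prompts the user to pick a screen, window, or browser tab to share
    return await navigator.mediaDevices.getDisplayMedia({ video: true });
  } catch (err) {
    // handle the user's rejection or lack of browser support
  }
}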

There’s a lot of potential in using the .getUserMedia() and .getDisplayMedia() methods, especially when combined with e.g. the Canvas and Web Audio APIs. But, as this is a topic for another day, let’s just preview our streams with simple <video/> and <audio/> elements.

<video autoplay></video>
<audio autoplay></audio>
// ...
// inside getMedia(), after successfully retrieving the MediaStream
const video = document.querySelector("video");
const audio = document.querySelector("audio");
video.srcObject = stream;
audio.srcObject = stream;
// ...

Small but simple?

Now, I’d love to hear your opinions about this kind of blog post. Not all content is created equal - some topics just don’t need that much to be said. So, let me know with a comment or a reaction below if you’d like to see more posts like this in the future, and if so, maybe a small topic suggestion? For now, as always, consider following me on Twitter, on my Facebook page, or through my weekly newsletter for more, and have a great day!
