Hi everyone! In this post we'll implement audio and video recording in the browser. Here are the goals we'll work through:
- Display the camera feed with audio
- Display a shared screen with audio
- Take and download screenshots
- Record video and audio

Before we start, let's get to know the JavaScript APIs this post relies on. There are three key players: `mediaDevices`, `MediaStream`, and `MediaRecorder`. Let's introduce each one.
mediaDevices
`mediaDevices` is the entry point for accessing the media devices connected to the machine. Because media devices are security-sensitive, a few restrictions apply:

- The user must grant permission before a device can be used
- It only works over https or on localhost; it will not run on plain http
- If you really need it on an http origin, you have to enable a browser flag

getUserMedia

`getUserMedia` captures a `MediaStream` from the microphone and camera for display. The detailed constraint options can be found here.
```js
const constraints = { audio: true, video: true }

navigator.mediaDevices.getUserMedia(constraints).then(stream => {
  // do something...
}).catch(err => {
  // do something...
})
```
getDisplayMedia
`getDisplayMedia` captures a `MediaStream` of the screen for display; the user can choose to share the whole screen, an application window, or a browser tab. The detailed constraint options can be found here.
```js
const constraints = { audio: true, video: true }

navigator.mediaDevices.getDisplayMedia(constraints).then(stream => {
  // do something...
}).catch(err => {
  // do something...
})
```
enumerateDevices
`enumerateDevices` lists the media devices currently available.
```js
navigator.mediaDevices.enumerateDevices().then(devices => {
  // do something...
}).catch(err => {
  // do something...
})
```
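Each entry in the resulting list is a `MediaDeviceInfo` whose `kind` is `'audioinput'`, `'videoinput'`, or `'audiooutput'`. As a small sketch, here is one way to group the list by kind; the `groupByKind` helper is my own, not part of the API:

```js
// Group a device list by its `kind` field
// ('audioinput', 'videoinput', or 'audiooutput').
function groupByKind (devices) {
  return devices.reduce((groups, device) => {
    (groups[device.kind] = groups[device.kind] || []).push(device)
    return groups
  }, {})
}

// In the browser you would feed it the real list:
// navigator.mediaDevices.enumerateDevices().then(devices => {
//   console.log(groupByKind(devices))
// })
```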
getSupportedConstraints
`getSupportedConstraints` lists every constraint property the browser supports. Note that, unlike the methods above, it returns a plain object synchronously rather than a Promise:

```js
const supported = navigator.mediaDevices.getSupportedConstraints()
// do something...
```
MediaStream
A `MediaStream` is a stream of video and/or audio, which we can capture with the `mediaDevices` methods introduced above. It contains one or more Tracks; for example, capturing both picture and sound from a camera yields two Tracks. You can get them like this:
```js
mediaStream.getTracks().forEach(track => {
  // do something...
})
```
Dynamically changing settings or listening for certain events is generally done on the individual Tracks.
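For instance, `MediaStreamTrack` exposes `applyConstraints` for changing settings on the fly and fires events such as `ended`. A minimal sketch, where the `getTrackOfKind` helper is my own hypothetical naming:

```js
// Pick the first track of a given kind ('audio' or 'video') from a track list.
function getTrackOfKind (tracks, kind) {
  return tracks.find(track => track.kind === kind) || null
}

// In the browser, adjust the camera resolution on the video track:
// const videoTrack = getTrackOfKind(mediaStream.getTracks(), 'video')
// if (videoTrack) {
//   videoTrack.applyConstraints({ width: 1280, height: 720 })
//   videoTrack.addEventListener('ended', () => {
//     // the source stopped producing data (e.g. user ended the screen share)
//   })
// }
```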
MediaRecorder
`MediaRecorder` records a `MediaStream`. It is used like this:
```js
const constraints = { audio: true, video: true }

navigator.mediaDevices.getUserMedia(constraints).then(stream => {
  const options = {
    audioBitsPerSecond: 128000,
    videoBitsPerSecond: 2500000,
    mimeType: 'video/webm'
  }
  const mediaRecorder = new MediaRecorder(stream, options)

  mediaRecorder.addEventListener('dataavailable', e => {
    // do something...
  })

  mediaRecorder.start()

  setTimeout(() => {
    mediaRecorder.stop()
  }, 50000)
}).catch(err => {
  // do something...
})
```
After recording stops, a `dataavailable` event fires with the recorded `Blob` attached to it. The MIME type passed in the options is the tricky part; you can use `MediaRecorder.isTypeSupported(mimeType)` to check whether a given type is supported.
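A practical pattern is to probe a list of candidate MIME types and use the first one the browser accepts. A sketch, where `pickSupportedType` and the candidate list are my own assumptions:

```js
// Return the first MIME type accepted by a support predicate, or '' if none is.
// In the browser, pass MediaRecorder.isTypeSupported as the predicate.
function pickSupportedType (candidates, isSupported) {
  return candidates.find(type => isSupported(type)) || ''
}

// Browser usage (hypothetical candidate list):
// const mimeType = pickSupportedType(
//   ['video/webm;codecs=vp9', 'video/webm;codecs=vp8', 'video/webm'],
//   type => MediaRecorder.isTypeSupported(type)
// )
// const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : {})
```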
Implementation
First, let's create the buttons and display areas we need. The `video` element shows the camera feed or the shared screen, while `img` and `canvas` are used to take and display screenshots.
```html
<div>
  <button id="camera">Camera</button>
  <button id="screen">Screen share</button>
  <button id="screenshot">Screenshot</button>
  <button id="screenshot-download">Download screenshot</button>
  <button id="video-start">Start video recording</button>
  <button id="video-download">Stop recording and download</button>
  <button id="recording-start">Start audio recording</button>
  <button id="recording-download">Stop recording and download</button>
  <button id="stop">Stop</button>
</div>
<div>
  <video id="video"></video>
  <img id="img">
  <canvas id="canvas" style="display: none;"></canvas>
</div>
```
Goal 1 - Display the camera feed and audio
Here we use the `getUserMedia` method introduced earlier to capture the camera feed.
```js
let cameraStream

const camera = document.querySelector('#camera')
const stop = document.querySelector('#stop')
const video = document.querySelector('#video')

const constraints = { audio: true, video: true }

camera.addEventListener('click', () => {
  navigator.mediaDevices.getUserMedia(constraints).then(stream => {
    cameraStream = stream
    video.srcObject = stream
    video.play()
  })
})

stop.addEventListener('click', () => {
  if (cameraStream) {
    cameraStream.getTracks().forEach(track => {
      track.stop()
    })
    cameraStream = null
  }
})
```
The result looks like this:
Goal 2 - Display the shared screen and audio
Likewise, we use the `getDisplayMedia` method introduced earlier to capture the shared screen.
```js
let screenStream

const screen = document.querySelector('#screen')
const stop = document.querySelector('#stop')
const video = document.querySelector('#video')

const constraints = { audio: true, video: true }

screen.addEventListener('click', () => {
  navigator.mediaDevices.getDisplayMedia(constraints).then(stream => {
    screenStream = stream
    video.srcObject = stream
    video.play()
  })
})

stop.addEventListener('click', () => {
  if (screenStream) {
    screenStream.getTracks().forEach(track => {
      track.stop()
    })
    screenStream = null
  }
})
```
The result looks like this:
Since starting a screen share or the camera should stop whatever device is currently running, let's extract the shared logic:
```js
let cameraStream
let screenStream

const camera = document.querySelector('#camera')
const screen = document.querySelector('#screen')
const stop = document.querySelector('#stop')
const video = document.querySelector('#video')

function stopAllStream () {
  if (cameraStream) {
    cameraStream.getTracks().forEach(track => {
      track.stop()
    })
    cameraStream = null
  }
  if (screenStream) {
    screenStream.getTracks().forEach(track => {
      track.stop()
    })
    screenStream = null
  }
}

const constraints = { audio: true, video: true }

camera.addEventListener('click', () => {
  navigator.mediaDevices.getUserMedia(constraints).then(stream => {
    stopAllStream()
    cameraStream = stream
    video.srcObject = stream
    video.play()
  })
})

screen.addEventListener('click', () => {
  navigator.mediaDevices.getDisplayMedia(constraints).then(stream => {
    stopAllStream()
    screenStream = stream
    video.srcObject = stream
    video.play()
  })
})

stop.addEventListener('click', stopAllStream)
```
Goal 3 - Screenshot and download
Taking a screenshot involves a few conversion tricks; you can refer to my earlier post on image conversion, so I won't go into detail here.
```js
let screenshotBlobUrl

const screenshot = document.querySelector('#screenshot')
const screenshotDownload = document.querySelector('#screenshot-download')
const video = document.querySelector('#video')
const img = document.querySelector('#img')
const canvas = document.querySelector('#canvas')
const ctx = canvas.getContext('2d')

screenshot.addEventListener('click', () => {
  if (!cameraStream && !screenStream) return
  const width = video.offsetWidth
  const height = video.offsetHeight
  canvas.width = width
  canvas.height = height
  ctx.drawImage(video, 0, 0, width, height)
  canvas.toBlob(blob => {
    screenshotBlobUrl = window.URL.createObjectURL(blob)
    img.src = screenshotBlobUrl
  })
})

screenshotDownload.addEventListener('click', () => {
  if (!screenshotBlobUrl) return
  const downloadLink = document.createElement('a')
  downloadLink.href = screenshotBlobUrl
  downloadLink.download = 'screenshot.png' // toBlob produces a PNG by default
  downloadLink.click()
})
```
Now you can take and download a screenshot whether you're sharing the screen or using the camera!
Goal 4 - Record video and audio
Finally, we use the `MediaRecorder` introduced earlier to record the video and audio.
```js
let videoMediaRecorder
let recordingMediaRecorder

const videoStart = document.querySelector('#video-start')
const videoDownload = document.querySelector('#video-download')
const recordingStart = document.querySelector('#recording-start')
const recordingDownload = document.querySelector('#recording-download')

videoStart.addEventListener('click', () => {
  if (!cameraStream && !screenStream) return
  const currentStream = cameraStream || screenStream
  const options = {
    audioBitsPerSecond: 128000,
    videoBitsPerSecond: 2500000,
    mimeType: 'video/webm'
  }
  const mediaRecorder = new MediaRecorder(currentStream, options)
  videoMediaRecorder = mediaRecorder
  mediaRecorder.addEventListener('dataavailable', e => {
    // keep the Blob type consistent with the recording mimeType
    const blob = new Blob([e.data], { type: 'video/webm' })
    const downloadLink = document.createElement('a')
    downloadLink.href = window.URL.createObjectURL(blob)
    downloadLink.download = 'video.webm'
    downloadLink.click()
  })
  mediaRecorder.start()
})

videoDownload.addEventListener('click', () => {
  if (!videoMediaRecorder) return
  videoMediaRecorder.stop()
})

recordingStart.addEventListener('click', () => {
  if (!cameraStream && !screenStream) return
  const currentStream = cameraStream || screenStream
  const options = {
    audioBitsPerSecond: 128000,
    mimeType: 'audio/webm'
  }
  const mediaRecorder = new MediaRecorder(currentStream, options)
  recordingMediaRecorder = mediaRecorder
  mediaRecorder.addEventListener('dataavailable', e => {
    const blob = new Blob([e.data], { type: 'audio/webm' })
    const downloadLink = document.createElement('a')
    downloadLink.href = window.URL.createObjectURL(blob)
    downloadLink.download = 'recording.webm'
    downloadLink.click()
  })
  mediaRecorder.start()
})

recordingDownload.addEventListener('click', () => {
  if (!recordingMediaRecorder) return
  recordingMediaRecorder.stop()
})
```
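The code above receives the entire recording in a single `dataavailable` event when `stop` is called. `MediaRecorder.start` also accepts an optional `timeslice` argument (in milliseconds), which makes `dataavailable` fire periodically; you then collect the chunks and assemble them once the `stop` event fires. A sketch of that variant, where the `buildBlob` helper name is my own:

```js
// Assemble recorded chunks into a single Blob of the given MIME type.
function buildBlob (chunks, type) {
  return new Blob(chunks, { type })
}

// Browser usage (sketch):
// const chunks = []
// mediaRecorder.addEventListener('dataavailable', e => { chunks.push(e.data) })
// mediaRecorder.addEventListener('stop', () => {
//   const blob = buildBlob(chunks, 'video/webm')
//   const downloadLink = document.createElement('a')
//   downloadLink.href = window.URL.createObjectURL(blob)
//   downloadLink.download = 'video.webm'
//   downloadLink.click()
// })
// mediaRecorder.start(1000) // request a chunk roughly every second
```

Chunked delivery is handy for long recordings, since you are not holding one giant Blob in a single event payload.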
Conclusion
Today's media APIs are quite easy to pick up, and unless you have to support old browsers you shouldn't run into many problems (my condolences if you do). Building something this visual is a lot of fun, so give it a try when you have some spare time!