WebSocket: Exploring Its Uses with Voice and Images

2015/12/26 · JavaScript · 3 comments · websocket

Original source: AlloyTeam

WebSocket should be familiar to most of us by now; if not, here it is in one sentence:

"The WebSocket protocol is a new protocol in HTML5. It enables full-duplex communication between browser and server."

Compared with the traditional server-push techniques, WebSocket is a huge improvement. We can wave goodbye to Comet and long polling, and be thankful we live in the age of HTML5~

This article explores WebSocket in three parts.

First come the common ways of using WebSocket; then building a server-side WebSocket implementation entirely by ourselves; and finally, the main course — two demos built on WebSocket, image transfer and an online voice chat room. Let's go.

Part 1: Common WebSocket usage

Here are three WebSocket implementations I consider common. (Note: this article assumes a Node.js environment.)

1. socket.io

The demo first:

JavaScript

var http = require('http');
var io = require('socket.io');

var server = http.createServer(function(req, res) {
    res.writeHead(200, {'content-type': 'text/html;charset="utf-8"'});
    res.end();
}).listen(8888);

var socket = io.listen(server);

socket.sockets.on('connection', function(socket) {
    socket.emit('xxx', { /* options */ });

    socket.on('xxx', function(data) {
        // do something
    });
});

Anyone who knows WebSocket has surely heard of socket.io — it is famous, and deservedly so: it handles timeouts, the handshake, and much more for you. I suspect it is also the most widely used way to work with WebSocket. Its best feature is graceful degradation: when the browser does not support WebSocket, it silently falls back to long polling and other transports, and neither users nor developers need to care about the details. Very convenient.
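For completeness, the matching browser side of the demo above might look roughly like this. This is only a sketch: it assumes the pre-1.0 socket.io client API that the server snippet uses, with the client script served automatically at /socket.io/socket.io.js, and 'xxx' is just the placeholder event name from above.

JavaScript

// Browser-side counterpart (sketch): connect, listen for the server's event,
// then emit one back. Event names mirror the placeholders used above.
var socket = io.connect('http://localhost:8888');

socket.on('xxx', function(data) {
    console.log('from server:', data);
    socket.emit('xxx', { hello: 'world' });
});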

But everything has two sides. socket.io's completeness also brings problems, the biggest being bloat: its wrapping adds a fair amount of overhead to the data it carries. And graceful degradation, its headline advantage, is gradually losing its lustre as browser support becomes standard:

Chrome Supported in version 4+
Firefox Supported in version 4+
Internet Explorer Supported in version 10+
Opera Supported in version 10+
Safari Supported in version 5+

None of this is to say socket.io is bad or obsolete — just that sometimes it is worth considering other implementations~

 

2. The http module

Having just called socket.io heavyweight, let's now look at something lightweight. The demo first:

JavaScript

var http = require('http');
var server = http.createServer();

server.on('upgrade', function(req) {
    console.log(req.headers);
});

server.listen(8888);

An extremely simple implementation. socket.io actually does the same thing with WebSocket internally, except it adds its own handlers afterwards — and we can add our own here too. Here are two screenshots from the socket.io source:

(screenshots of the relevant socket.io source omitted)
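Going one small step further (my sketch, not from the original article): Node also hands the 'upgrade' listener the raw socket and any already-buffered bytes, and that socket is exactly where a handshake response like the one in Part 2 gets written.

JavaScript

var http = require('http');
var server = http.createServer();

// The 'upgrade' event provides the request, the underlying net.Socket and the
// first buffered bytes; from here you can answer the handshake yourself.
server.on('upgrade', function(req, socket, head) {
    console.log(req.headers['sec-websocket-key']);
    // socket.write('HTTP/1.1 101 Switching Protocols\r\n' + ...);
});

server.listen(8888);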

 

3. The ws module

The last example in this article uses it, so I'll only mention it here and come back to it in detail later~

 

Part 2: Implementing server-side WebSocket yourself

We have just covered three common WebSocket implementations. Now think about it from the developer's point of view:

compared with the traditional HTTP request/response model, WebSocket adds server-pushed events that the client simply handles as they arrive — from a coding perspective the difference does not look that big.

That is only because these modules have already filled in the pitfalls of data-frame parsing for us. In this part we will try to build a simple server-side WebSocket module ourselves.

Thanks to 次碳酸钴 for the research help. I will only sketch this part briefly; if you are curious about the details, search for his blog, web技术研究所.

Doing server-side WebSocket yourself comes down to two things: accepting the data stream with the net module, and parsing the data against the official frame-structure diagram. With those two pieces in place, all the low-level work is done.

First, a packet capture of the WebSocket handshake message a client sends.

The client code is trivially simple:

JavaScript

ws = new WebSocket("ws://127.0.0.1:8888");

(screenshot of the captured handshake request omitted)
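For reference, a client handshake request generally looks like the following; the Sec-WebSocket-Key below is the example value from RFC 6455, while a real browser generates a random one per connection and the exact header set varies slightly:

GET / HTTP/1.1
Host: 127.0.0.1:8888
Connection: Upgrade
Upgrade: websocket
Origin: http://localhost
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==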

The server has to answer that key: append a specific magic string to the key, run SHA-1 over the result, Base64-encode the digest, and send it back.

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

require('net').createServer(function(o) {
    var key;
    o.on('data', function(e) {
        if(!key) {
            // Grab the key the client sent
            key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
            // Append the magic string WS, SHA-1 it, then Base64-encode the digest
            key = crypto.createHash('sha1').update(key + WS).digest('base64');
            // Write the response back to the client; all of these fields are required
            o.write('HTTP/1.1 101 Switching Protocols\r\n');
            o.write('Upgrade: websocket\r\n');
            o.write('Connection: Upgrade\r\n');
            // This field carries the key processed by the server
            o.write('Sec-WebSocket-Accept: ' + key + '\r\n');
            // An empty line ends the HTTP headers
            o.write('\r\n');
        }
    });
}).listen(8888);
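As a standalone sanity check of the key computation (not part of the server above), the example key from RFC 6455, Section 1.3 should produce the accept value shown in the comment:

JavaScript

var crypto = require('crypto');
var GUID = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';
var key = 'dGhlIHNhbXBsZSBub25jZQ=='; // example key from RFC 6455
var accept = crypto.createHash('sha1').update(key + GUID).digest('base64');
console.log(accept); // "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="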

With that, the handshake part is done; what remains is the work of parsing and generating data frames.

First, take a look at the frame-structure diagram from the official spec:

(frame-structure diagram from the spec omitted)
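For reference, this is the base framing layout as given in RFC 6455, Section 5.2, which the parsing code below follows:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|F|R|R|R| opcode|M| Payload len |    Extended payload length    |
|I|S|S|S|  (4)  |A|     (7)     |             (16/64)           |
|N|V|V|V|       |S|             |   (if payload len==126/127)   |
| |1|2|3|       |K|             |                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Extended payload length continued, if payload len == 127  |
+ - - - - - - - - - - - - - - - +-------------------------------+
|                               |Masking-key, if MASK set to 1  |
+-------------------------------+-------------------------------+
| Masking-key (continued)       |          Payload Data         |
+-------------------------------- - - - - - - - - - - - - - - - +
:                     Payload Data continued ...                :
+ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
|                     Payload Data continued ...                |
+---------------------------------------------------------------+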

A quick rundown:

FIN marks whether this frame is the final fragment of a message

RSV bits are reserved and set to 0

opcode marks the payload type: continuation (fragmentation), text or binary parsing, heartbeat frames, and so on

Here is the opcode mapping (the original image is gone; these are the values defined in RFC 6455): 0x0 continuation frame, 0x1 text frame, 0x2 binary frame, 0x8 connection close, 0x9 ping, 0xA pong; the remaining values are reserved.

MASK marks whether the payload is masked

Payload len, together with the Extended payload length that follows, expresses the data length — this is the most fiddly part

Payload len is only 7 bits, which as an unsigned integer covers just 0 to 127 — far too small to describe large payloads. So the rule is: if the length is 125 or less, this field is the actual length; if the field equals 126, the next two bytes store the length; if it equals 127, the next eight bytes store the length.
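A quick worked example (my numbers, not from the original): encoding the length of a 300-byte payload.

JavaScript

// 300 > 125, so Payload len is written as 126 and the real length goes into
// the next two bytes, big-endian: 300 = 0x012C -> bytes 0x01, 0x2C.
var len = 300;
var lengthBytes = [126, (len & 0xFF00) >> 8, len & 0xFF]; // [126, 1, 44]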

Masking-key is the masking key itself

Below is the code for decoding a data frame:

JavaScript

function decodeDataFrame(e) {
    var i = 0,
        j, s,
        frame = {
            FIN: e[i] >> 7,
            Opcode: e[i++] & 15,
            Mask: e[i] >> 7,
            PayloadLength: e[i++] & 0x7F
        };

    // 126: the real length lives in the next 2 bytes
    if(frame.PayloadLength === 126) {
        frame.PayloadLength = (e[i++] << 8) + e[i++];
    }

    // 127: the real length lives in the next 8 bytes; skip the high 4 bytes
    // here and assume the payload fits in 32 bits
    if(frame.PayloadLength === 127) {
        i += 4;
        frame.PayloadLength = (e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8) + e[i++];
    }

    if(frame.Mask) {
        frame.MaskingKey = [e[i++], e[i++], e[i++], e[i++]];

        // Unmask the payload: XOR each byte with the masking key
        for(j = 0, s = []; j < frame.PayloadLength; j++) {
            s.push(e[i + j] ^ frame.MaskingKey[j % 4]);
        }
    } else {
        s = e.slice(i, i + frame.PayloadLength);
    }

    s = new Buffer(s);

    // Opcode 1 is a text frame, so convert the payload to a string
    if(frame.Opcode === 1) {
        s = s.toString();
    }

    frame.PayloadData = s;
    return frame;
}

And then the code for generating a data frame:

JavaScript

function encodeDataFrame(e) {
    var s = [],
        o = new Buffer(e.PayloadData),
        l = o.length;

    // First byte: FIN flag plus opcode
    s.push((e.FIN << 7) + e.Opcode);

    // Length bytes (server-to-client frames are not masked)
    if(l < 126) {
        s.push(l);
    } else if(l < 0x10000) {
        s.push(126, (l & 0xFF00) >> 8, l & 0xFF);
    } else {
        s.push(127, 0, 0, 0, 0, (l & 0xFF000000) >> 24, (l & 0xFF0000) >> 16, (l & 0xFF00) >> 8, l & 0xFF);
    }

    return Buffer.concat([new Buffer(s), o]);
}

Both functions just follow the frame-structure diagram. I won't dig deeper here, since the heart of this article is the next part; if this area interests you, head over to web技术研究所~
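To tie Part 2 together, here is a minimal echo-server sketch that combines the handshake code above with decodeDataFrame and encodeDataFrame. It is illustrative only and assumes that, after the handshake, each 'data' event carries exactly one complete, unfragmented frame:

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

require('net').createServer(function(o) {
    var key;
    o.on('data', function(e) {
        if(!key) {
            // Handshake, exactly as in the snippet above
            key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
            key = crypto.createHash('sha1').update(key + WS).digest('base64');
            o.write('HTTP/1.1 101 Switching Protocols\r\n');
            o.write('Upgrade: websocket\r\n');
            o.write('Connection: Upgrade\r\n');
            o.write('Sec-WebSocket-Accept: ' + key + '\r\n');
            o.write('\r\n');
        } else {
            // After the handshake, treat each chunk as one WebSocket frame
            var frame = decodeDataFrame(e);
            if(frame.Opcode === 8) { // close frame
                o.end();
                return;
            }
            // Echo text frames straight back to the client
            o.write(encodeDataFrame({
                FIN: 1,
                Opcode: 1,
                PayloadData: 'echo: ' + frame.PayloadData
            }));
        }
    });
}).listen(8888);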

 

Part 3: Transferring images over WebSocket, and a WebSocket voice chat room

Now for the main part. The real point of this article is to show a couple of WebSocket application scenarios.

1. Transferring images

First, think through the steps of transferring an image: the server receives the client's request, reads the image file, and forwards the binary data to the client. And how does the client handle it? With a FileReader object, of course.

Client code first:

JavaScript

var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");
 
ws.onopen = function(){
    console.log("握手成功");
};
 
ws.onmessage = function(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var contents = event.target.result;
        var a = new Image();
        a.src = contents;
        document.body.appendChild(a);
    }
    reader.readAsDataURL(e.data);
};

When a message arrives we call readAsDataURL and append the image to the page directly as a base64 data URL.

Over to the server-side code:

JavaScript

fs.readdir("skyland", function(err, files) {
if(err) {
throw err;
}
for(var i = 0; i < files.length; i++) {
fs.readFile(‘skyland/’ + files[i], function(err, data) {
if(err) {
throw err;
}
 
o.write(encodeImgFrame(data));
});
}
});
 
function encodeImgFrame(buf) {
var s = [],
l = buf.length,
ret = [];
 
s.push((1 << 7) + 2);
 
if(l < 126) {
s.push(l);
} else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF);
} else {
s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
}
 
return Buffer.concat([new Buffer(s), buf]);
}

注意s.push((1 << 7) +
2)
这一句,这里非凡直接把opcode写死了为二,对于Binary
Frame,这样客户端接收到数量是不会尝试举办toString的,不然会报错~
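A related note on the client side (my sketch, not part of the original demo): browsers deliver binary frames as a Blob by default, which is why FileReader works above; if you prefer ArrayBuffers, you can switch binaryType and build an object URL instead. The image/png type below is an assumption.

JavaScript

ws.binaryType = 'arraybuffer';

ws.onmessage = function(e) {
    var blob = new Blob([e.data], { type: 'image/png' }); // assumed MIME type
    var img = new Image();
    img.src = URL.createObjectURL(blob);
    document.body.appendChild(img);
};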

代码很简短,在此地向大家大饱眼福一下websocket传输图片的进程怎样

The test set was a few dozen images, 8.24 MB in total.

An ordinary static-asset server takes around 20s (the server is far away).

A CDN takes around 2.8s.

So what about our WebSocket approach??!

The answer: also around 20s. Disappointing, isn't it... The time is spent in the transfer itself, not in the server reading the images — the same set of images finishes in about 1s on the local machine. So a raw data stream still cannot break through the limits that distance puts on transfer speed.

Now let's look at another use of WebSocket~

 

2. Building a voice chat room with WebSocket

First, let's spell out what the voice chat room should do:

a user enters a channel and speaks into the microphone; the audio is sent to the backend, which forwards it to everyone else in the channel, and they play whatever they receive.

The difficulty appears to lie in two places: first, capturing the audio input; second, playing back the received data stream.

Audio input first. This uses HTML5's getUserMedia — but be warned, putting this method into production has a huge pitfall, which I'll get to at the end. Code first:

JavaScript

if (navigator.getUserMedia) {
    navigator.getUserMedia(
        { audio: true },
        function (stream) {
            var rec = new SRecorder(stream);
            recorder = rec;
        })
}

The first argument is { audio: true } to enable audio only; then an SRecorder object is created, and essentially all later operations happen on that object. If the code is running locally, the browser should prompt you to allow microphone input; confirm and it starts.

Next, what does the SRecorder constructor look like? Here is the important part:

JavaScript

var SRecorder = function(stream) {
    ……
   var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
    ……
}

AudioContext is an audio context object. Anyone who has done sound filtering will recognise the idea: "before a piece of audio reaches the speakers, intercept it halfway and you get hold of the audio data; this interception is done by window.AudioContext, and all of our audio operations are built on this object." Through an AudioContext we can create different AudioNodes, add filters, and play back altered sound.

Recording works on the same principle: we still go through AudioContext, but with an extra step of capturing the microphone input — instead of, as before, fetching an audio ArrayBuffer with ajax and decoding it. Capturing the microphone uses createMediaStreamSource, and note that its argument is exactly the stream handed to the success callback of getUserMedia (its second argument).

Plus the createScriptProcessor method, whose official description is:

Creates a ScriptProcessorNode, which can be used for direct audio
processing via JavaScript.

——————

In short: this method lets us process the captured audio with JavaScript.

At last, audio capture! Victory is in sight!

Next, let's connect the microphone input to the audio-capture node:

JavaScript

audioInput.connect(recorder);
recorder.connect(context.destination);

context.destination is officially described as follows:

The destination property of the AudioContext interface returns an AudioDestinationNode representing the final destination of all audio in the context.

——————

That is, context.destination returns the final destination of all audio in the context.

Good. At this point we still need an event that listens for the captured audio:

JavaScript

recorder.onaudioprocess = function (e) {
    audioData.input(e.inputBuffer.getChannelData(0));
}

audioData is an object I found on the internet; I only added a clear method because it's needed later. Its encodeWAV method is especially good — the author applied several rounds of audio compression and optimisation. It appears together with the complete code at the end.

At this point the whole "user enters the channel and speaks into the microphone" part is done. Next comes sending the audio stream to the server, and this is where it gets slightly painful. As mentioned, WebSocket uses the opcode to distinguish text from binary data, but what we push into audioData in onaudioprocess is an array, while playback needs a Blob of { type: 'audio/wav' }. So before sending we must convert the array into a WAV Blob — which is exactly what the encodeWAV method above is for.

The server looks easy: it only has to forward.

Local testing did work — and then the giant pit appeared! When I ran the program on a server, calling getUserMedia told me it must run in a secure context, i.e. it requires HTTPS, which means ws must also become wss... So the server code no longer uses our own handshake, decoder and encoder; it looks like this:

JavaScript

var https = require('https');
var fs = require('fs');
var ws = require('ws');
var userMap = Object.create(null);
var options = {
    key: fs.readFileSync('./privatekey.pem'),
    cert: fs.readFileSync('./certificate.pem')
};
var server = https.createServer(options, function(req, res) {
    res.writeHead(200, {
        'Content-Type' : 'text/html'
    });

    fs.readFile('./testaudio.html', function(err, data) {
        if(err) {
            return;
        }

        res.end(data);
    });
});

var wss = new ws.Server({server: server});

wss.on('connection', function(o) {
    o.on('message', function(message) {
        // The first message identifies the user: "user:<name>"
        if(message.indexOf('user') === 0) {
            var user = message.split(':')[1];
            userMap[user] = o;
        } else {
            // Everything else is audio data: forward it to everyone in the channel
            for(var u in userMap) {
                userMap[u].send(message);
            }
        }
    });
});

server.listen(8888);

The code is still quite simple: it uses the https module plus the ws module mentioned at the beginning; userMap is the simulated channel, and only the core forwarding feature is implemented.

I used the ws module because pairing it with https to get wss is just too convenient — zero friction with the logic code.

I won't go into setting up HTTPS here; essentially you need a private key, a CSR, certificate signing, and the certificate files. Interested readers can look into it (though without it you can't use getUserMedia in a production environment anyway...).
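For local testing only, a self-signed key and certificate matching the file names used in the server code can be generated along these lines (this assumes the openssl command-line tool; a real deployment needs a CA-signed certificate):

openssl genrsa -out privatekey.pem 2048
openssl req -new -key privatekey.pem -out certrequest.csr
openssl x509 -req -days 365 -in certrequest.csr -signkey privatekey.pem -out certificate.pem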

上边是壹体化的前端代码

JavaScript

var a = document.getElementById('a');
var b = document.getElementById('b');
var c = document.getElementById('c');

navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;

var gRecorder = null;
var audio = document.querySelector('audio');
var door = false;
var ws = null;

b.onclick = function() {
    if(a.value === '') {
        alert('Please enter a user name');
        return false;
    }
    if(!navigator.getUserMedia) {
        alert('Sorry, your device does not support voice chat');
        return false;
    }

    SRecorder.get(function (rec) {
        gRecorder = rec;
    });

    ws = new WebSocket("wss://x.x.x.x:8888");

    ws.onopen = function() {
        console.log('handshake OK');
        ws.send('user:' + a.value);
    };

    ws.onmessage = function(e) {
        receive(e.data);
    };

    document.onkeydown = function(e) {
        if(e.keyCode === 65) {
            if(!door) {
                gRecorder.start();
                door = true;
            }
        }
    };

    document.onkeyup = function(e) {
        if(e.keyCode === 65) {
            if(door) {
                ws.send(gRecorder.getBlob());
                gRecorder.clear();
                gRecorder.stop();
                door = false;
            }
        }
    }
}

c.onclick = function() {
    if(ws) {
        ws.close();
    }
}

var SRecorder = function(stream) {
    config = {};

    config.sampleBits = config.sampleBits || 8;
    config.sampleRate = config.sampleRate || (44100 / 6);

    var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);

    var audioData = {
        size: 0          // length of the recording
        , buffer: []     // recording buffer
        , inputSampleRate: context.sampleRate    // input sample rate
        , inputSampleBits: 16       // input sample bits: 8 or 16
        , outputSampleRate: config.sampleRate    // output sample rate
        , outputSampleBits: config.sampleBits       // output sample bits: 8 or 16
        , clear: function() {
            this.buffer = [];
            this.size = 0;
        }
        , input: function (data) {
            this.buffer.push(new Float32Array(data));
            this.size += data.length;
        }
        , compress: function () { // merge and downsample
            // merge
            var data = new Float32Array(this.size);
            var offset = 0;
            for (var i = 0; i < this.buffer.length; i++) {
                data.set(this.buffer[i], offset);
                offset += this.buffer[i].length;
            }
            // downsample
            var compression = parseInt(this.inputSampleRate / this.outputSampleRate);
            var length = data.length / compression;
            var result = new Float32Array(length);
            var index = 0, j = 0;
            while (index < length) {
                result[index] = data[j];
                j += compression;
                index++;
            }
            return result;
        }
        , encodeWAV: function () {
            var sampleRate = Math.min(this.inputSampleRate, this.outputSampleRate);
            var sampleBits = Math.min(this.inputSampleBits, this.outputSampleBits);
            var bytes = this.compress();
            var dataLength = bytes.length * (sampleBits / 8);
            var buffer = new ArrayBuffer(44 + dataLength);
            var data = new DataView(buffer);

            var channelCount = 1; // mono
            var offset = 0;

            var writeString = function (str) {
                for (var i = 0; i < str.length; i++) {
                    data.setUint8(offset + i, str.charCodeAt(i));
                }
            };

            // RIFF chunk identifier
            writeString('RIFF'); offset += 4;
            // total bytes from the next address to the end of the file, i.e. file size - 8
            data.setUint32(offset, 36 + dataLength, true); offset += 4;
            // WAV identifier
            writeString('WAVE'); offset += 4;
            // format chunk marker
            writeString('fmt '); offset += 4;
            // length of the format data, normally 0x10 = 16
            data.setUint32(offset, 16, true); offset += 4;
            // audio format (1 = PCM samples)
            data.setUint16(offset, 1, true); offset += 2;
            // number of channels
            data.setUint16(offset, channelCount, true); offset += 2;
            // sample rate: samples per second per channel
            data.setUint32(offset, sampleRate, true); offset += 4;
            // byte rate (average bytes per second): channels × sample rate × bits per sample / 8
            data.setUint32(offset, channelCount * sampleRate * (sampleBits / 8), true); offset += 4;
            // block align: bytes per sample frame = channels × bits per sample / 8
            data.setUint16(offset, channelCount * (sampleBits / 8), true); offset += 2;
            // bits per sample
            data.setUint16(offset, sampleBits, true); offset += 2;
            // data chunk identifier
            writeString('data'); offset += 4;
            // total size of the sample data, i.e. file size - 44
            data.setUint32(offset, dataLength, true); offset += 4;
            // write the sample data
            if (sampleBits === 8) {
                for (var i = 0; i < bytes.length; i++, offset++) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    var val = s < 0 ? s * 0x8000 : s * 0x7FFF;
                    val = parseInt(255 / (65535 / (val + 32768)));
                    data.setInt8(offset, val, true);
                }
            } else {
                for (var i = 0; i < bytes.length; i++, offset += 2) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    data.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
                }
            }

            return new Blob([data], { type: 'audio/wav' });
        }
    };

    this.start = function () {
        audioInput.connect(recorder);
        recorder.connect(context.destination);
    }

    this.stop = function () {
        recorder.disconnect();
    }

    this.getBlob = function () {
        return audioData.encodeWAV();
    }

    this.clear = function() {
        audioData.clear();
    }

    recorder.onaudioprocess = function (e) {
        audioData.input(e.inputBuffer.getChannelData(0));
    }
};

SRecorder.get = function (callback) {
    if (callback) {
        if (navigator.getUserMedia) {
            navigator.getUserMedia(
                { audio: true },
                function (stream) {
                    var rec = new SRecorder(stream);
                    callback(rec);
                })
        }
    }
}

function receive(e) {
    audio.src = window.URL.createObjectURL(e);
}

Note: hold down the A key to talk, release it to send.

I did try hands-free real-time talk, sending on a setInterval, but the background noise was heavy and the result was poor; that would need another layer of processing on top of encodeWAV to strip ambient noise, so I chose the much simpler push-to-talk approach.

 

This article first looked at WebSocket's prospects and common usage, then, following the spec, we tried parsing and generating data frames ourselves, gaining a deeper understanding of WebSocket.

Finally, the two demos show some of WebSocket's potential. The voice-chat demo covers a lot of ground; if you haven't touched the AudioContext object before, it's best to read up on AudioContext first.

That's the end of the article~ Ideas and questions are welcome — let's discuss them together~

 
