I am trying to add the ability to use the Direct Line Speech channel to my dialog bot. I am reading Microsoft's tutorial on how to do this, but they only use an echo bot. I want to be able to use a dialog bot and have it return speech. I have already created a Speech resource in Azure and enabled the Direct Line Speech channel on my bot resource in Azure. Has anyone successfully added speech to a dialog bot? I read that there is a speech prompt option, but I cannot find that property on the PromptOptions object.
Posted on 2020-11-17 07:50:49
How speech is configured depends on the type you plan to use, which can also mean updating your bot as well as the client you are using.
A quick note regarding the client (i.e., the channel): the channel is the determining factor in whether speech is supported. For examples, see the speech-enabled samples in BotBuilder-Samples and the docs.
Regarding DL Speech, you will need to add/update your bot's index.js code to include the following:
[...]
// Catch-all for errors.
const onTurnErrorHandler = async (context, error) => {
    // This check writes out errors to console log .vs. app insights.
    // NOTE: In production environment, you should consider logging this to Azure
    // application insights. See https://aka.ms/bottelemetry for telemetry
    // configuration instructions.
    console.error(`\n [onTurnError] unhandled error: ${ error }`);

    // Send a trace activity, which will be displayed in Bot Framework Emulator
    await context.sendTraceActivity(
        'OnTurnError Trace',
        `${ error }`,
        'https://www.botframework.com/schemas/error',
        'TurnError'
    );

    // Send a message to the user
    await context.sendActivity('The bot encountered an error or bug.');
    await context.sendActivity('To continue to run this bot, please fix the bot source code.');
};

// Set the onTurnError for the singleton BotFrameworkAdapter.
adapter.onTurnError = onTurnErrorHandler;
[...]
// Listen for Upgrade requests for Streaming.
server.on('upgrade', (req, socket, head) => {
    // Create an adapter scoped to this WebSocket connection to allow storing session data.
    const streamingAdapter = new BotFrameworkAdapter({
        appId: process.env.MicrosoftAppId,
        appPassword: process.env.MicrosoftAppPassword
    });

    // Set onTurnError for the BotFrameworkAdapter created for each connection.
    streamingAdapter.onTurnError = onTurnErrorHandler;

    streamingAdapter.useWebSocket(req, socket, head, async (context) => {
        // After connecting via WebSocket, run this logic for every request sent over
        // the WebSocket connection.
        await myBot.run(context);
    });
});

Then, in Web Chat, you will pass in the following. (You can reference this code in the DL Speech sample. Also, note that you will need to update the "fetch" address to your own token-generating API.):
[...]
const fetchCredentials = async () => {
    const res = await fetch('https://webchat-mockbot-streaming.azurewebsites.net/speechservices/token', {
        method: 'POST'
    });

    if (!res.ok) {
        throw new Error('Failed to fetch authorization token and region.');
    }

    const { region, token: authorizationToken } = await res.json();
    return { authorizationToken, region };
};

// Create a set of adapters for Web Chat to use with Direct Line Speech channel.
const adapters = await window.WebChat.createDirectLineSpeechAdapters({
    fetchCredentials
});

// Pass the set of adapters to Web Chat.
window.WebChat.renderWebChat(
    {
        ...adapters
    },
    document.getElementById('webchat')
);
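The `fetchCredentials` above calls the sample's own token API; as noted, you must point it at your own. A minimal sketch of what that endpoint's core logic could look like, assuming Node 18+ (global `fetch`); the function names and parameter sourcing here are assumptions, not from the original answer:

```javascript
// Build the Cognitive Services STS URL for a given Speech region.
const issueTokenUrl = (region) =>
    `https://${ region }.api.cognitive.microsoft.com/sts/v1.0/issueToken`;

// Exchange the Speech subscription key for a short-lived authorization token.
// Your server route would serialize this object as JSON, since fetchCredentials
// destructures { region, token } from the response body.
async function fetchSpeechToken(region, subscriptionKey) {
    const res = await fetch(issueTokenUrl(region), {
        method: 'POST',
        headers: { 'Ocp-Apim-Subscription-Key': subscriptionKey }
    });

    if (!res.ok) {
        throw new Error(`Token request failed: ${ res.status }`);
    }

    return { token: await res.text(), region };
}
```

Keeping the subscription key server-side like this (rather than in the page) is the point of the indirection: the browser only ever sees the short-lived token.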
[...]

There are additional resources that can help you better understand DL Speech, including how to cut down on speech errors.
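One concrete lever for cutting down on speech errors is the activity's `inputHint` field, which tells a speech-enabled client when to open (or keep closed) the microphone. The hint values below are from the Bot Framework activity schema; the sample activity itself is hypothetical:

```javascript
// Input hint values defined by the Bot Framework activity schema.
const INPUT_HINTS = {
    accepting: 'acceptingInput',  // bot is ready, but the mic does not open automatically
    expecting: 'expectingInput',  // bot awaits a reply, the mic opens automatically
    ignoring: 'ignoringInput'     // bot is busy; input would be discarded
};

// Hypothetical example: while processing, signal that speech input is ignored,
// so the client does not capture (and misrecognize) stray audio.
const processingActivity = {
    type: 'message',
    text: 'Processing your order...',
    speak: 'Processing your order',
    inputHint: INPUT_HINTS.ignoring
};
```

Setting the correct hint avoids the client listening at the wrong moment, which is a common source of spurious recognitions.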
Regarding CS Speech, you will need an active Cognitive Services subscription. Once you have set up your Speech service in Azure, you can use the subscription key to generate the token used for enabling CS Speech (you can also reference this Web Chat sample). No changes to the bot are required. (Again, you will need to set up an API for generating tokens, since best practice is to not include any keys in your HTML. That is also how I fetch the DL token in this example):
let authorizationToken;
let region = '<<SPEECH SERVICES REGION>>';

const response = await fetch(`https://${ region }.api.cognitive.microsoft.com/sts/v1.0/issueToken`, {
    method: 'POST',
    headers: {
        'Ocp-Apim-Subscription-Key': '<<SUBSCRIPTION KEY>>'
    }
});

if (response.status === 200) {
    authorizationToken = await response.text();
} else {
    console.log('error');
}
const webSpeechPonyfillFactory = await window.WebChat.createCognitiveServicesSpeechServicesPonyfillFactory({
    authorizationToken,
    region
});

const res = await fetch('http://localhost:3500/directline/token', { method: 'POST' });
const { token } = await res.json();

window.WebChat.renderWebChat(
    {
        directLine: window.WebChat.createDirectLine({ token }),
        webSpeechPonyfillFactory
    },
    document.getElementById('webchat')
);
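As for the speech prompt option mentioned in the question: `PromptOptions` has no dedicated speech property. Instead, the activity you pass as `prompt` carries a `speak` field (plain text or SSML); `MessageFactory.text(text, speak, inputHint)` from the botbuilder package builds such an activity. A plain-object sketch, where the prompt id and dialog wiring are hypothetical:

```javascript
// `text` is what appears in the chat transcript; `speak` is what a
// speech-enabled channel reads aloud (plain text or SSML).
const namePrompt = {
    type: 'message',
    text: 'What is your name?',
    speak: 'What is your name?',
    inputHint: 'expectingInput'  // open the mic for the reply
};

// Inside a waterfall step (assumes a TextPrompt registered as 'textPrompt'):
// return await step.prompt('textPrompt', { prompt: namePrompt });
```

So the dialog code does not change shape at all; the speech output is carried entirely by the activity the prompt sends.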
Hope this helps!
https://stackoverflow.com/questions/64358991