I'm having serious trouble getting Microsoft's SpeechRecognitionEngine example (https://msdn.microsoft.com/en-us/library/system.speech.recognition.speechrecognitionengine.speechdetected(v=vs.110).aspx) to work.
Here is the sample code:
using System;
using System.Speech.Recognition;

namespace SampleRecognition
{
    class Program
    {
        // Initialize an in-process speech recognition engine.
        static void Main(string[] args)
        {
            using (SpeechRecognitionEngine recognizer =
                new SpeechRecognitionEngine())
            {
                // Create a grammar.
                Choices cities = new Choices(new string[] {
                    "Los Angeles", "New York", "Chicago", "San Francisco", "Miami", "Dallas" });

                GrammarBuilder gb = new GrammarBuilder();
                gb.Culture = new System.Globalization.CultureInfo("en-GB");
                gb.Append("I would like to fly from");
                gb.Append(cities);
                gb.Append("to");
                gb.Append(cities);

                // Create a Grammar object and load it into the recognizer.
                Grammar g = new Grammar(gb);
                g.Name = "City Chooser";
                recognizer.LoadGrammarAsync(g);

                // Attach event handlers.
                recognizer.LoadGrammarCompleted +=
                    new EventHandler<LoadGrammarCompletedEventArgs>(recognizer_LoadGrammarCompleted);
                recognizer.SpeechDetected +=
                    new EventHandler<SpeechDetectedEventArgs>(recognizer_SpeechDetected);
                recognizer.SpeechRecognized +=
                    new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

                // Set the input to the recognizer.
                recognizer.SetInputToDefaultAudioDevice();

                // Start recognition.
                recognizer.RecognizeAsync();

                // Keep the console window open.
                Console.ReadLine();
            }
        }

        // Handle the SpeechDetected event.
        static void recognizer_SpeechDetected(object sender, SpeechDetectedEventArgs e)
        {
            Console.WriteLine(" Speech detected at AudioPosition = {0}", e.AudioPosition);
        }

        // Handle the LoadGrammarCompleted event.
        static void recognizer_LoadGrammarCompleted(object sender, LoadGrammarCompletedEventArgs e)
        {
            Console.WriteLine("Grammar loaded: " + e.Grammar.Name);
        }

        // Handle the SpeechRecognized event.
        static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Console.WriteLine(" Speech recognized: " + e.Result.Text);
        }
    }
}

Nothing seems to work. I have tried many different examples and spent an entire day trying to get this running. For example, if I swap RecognizeAsync() for EmulateRecognizeAsync("I would like to fly from Chicago to Miami"), it works as expected. But the program doesn't seem to get any input from my microphone.
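For reference, the working swap looks like this (a minimal sketch; the emulated phrase has to match the loaded grammar):

```csharp
// Instead of starting live recognition from the microphone:
// recognizer.RecognizeAsync();

// ...feed the recognizer a text phrase that matches the grammar.
// This bypasses the audio path entirely, so if SpeechRecognized
// fires here, the grammar and event handlers are wired up
// correctly and the problem lies with the microphone input.
recognizer.EmulateRecognizeAsync("I would like to fly from Chicago to Miami");
```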
Here are some more details:
Am I missing something? Do I need a better microphone? Do I need to configure the hardware beyond setting the microphone as the default device? Do I need a different library for Windows 8? I'm out of ideas.
Thanks in advance!
Posted on 2015-07-07 16:26:57
It turns out that the microphone built into my laptop simply does not work with Windows speech recognition.
I received a new headset today, and everything works perfectly. I still get the message
ConsoleApplication1.vshost.exe Information: 0 : SAPI does not implement phonetic alphabet selection. ...but everything seems to work.
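One way to confirm whether any audio is reaching the recognizer at all is to attach the engine's audio events before starting recognition (a sketch; AudioStateChanged and AudioLevelUpdated are both events on SpeechRecognitionEngine):

```csharp
// Attach these before calling RecognizeAsync() to see whether the
// recognizer is actually receiving audio from the default device.
recognizer.AudioStateChanged +=
    (s, e) => Console.WriteLine("Audio state: " + e.AudioState);   // Stopped / Silence / Speech
recognizer.AudioLevelUpdated +=
    (s, e) => Console.WriteLine("Audio level: " + e.AudioLevel);   // 0-100

// If the state never leaves Stopped/Silence and the level stays at 0
// while you speak, the microphone is not delivering usable input --
// which is exactly what a broken built-in mic looks like.
```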
https://stackoverflow.com/questions/31256176