I'm trying to implement speech recognition in an iOS Swift app. When the user taps the "microphone" button, I play a system sound and then start speech recognition with SpeechKit. If I comment out the SpeechKit code, the sound plays fine. But when both are in place, I don't hear the sound, and I also don't hear the final sound after speech recognition completes.
Here's the code:
@IBAction func listenButtonTapped(sender: UIBarButtonItem) {
    let systemSoundID: SystemSoundID = 1113
    AudioServicesPlaySystemSound(systemSoundID)
    let session = SKSession(URL: NSURL(string: "nmsps://{my Nuance key}@sslsandbox.nmdp.nuancemobility.net:443"), appToken: "{my Nuance token}")
    session.recognizeWithType(SKTransactionSpeechTypeDictation,
                              detection: .Long,
                              language: "eng-USA",
                              delegate: self)
}
func transaction(transaction: SKTransaction!, didReceiveRecognition recognition: SKRecognition!) {
    let speechString = recognition.text
    print(speechString!)
    let systemSoundID: SystemSoundID = 1114
    AudioServicesPlaySystemSound(systemSoundID)
}
Either way, the speech recognition always works fine. If I comment it out, then the system sounds play fine.
For example, the sound below plays fine every time I tap the button:
@IBAction func listenButtonTapped(sender: UIBarButtonItem) {
    let systemSoundID: SystemSoundID = 1113
    AudioServicesPlaySystemSound(systemSoundID)
}
I've tried different queues without success. I think I need to move the SpeechKit code into some kind of callback or closure, but I'm not sure how to structure it.
Posted on 2017-07-21 17:51:18
The solution to this problem is described here: https://developer.apple.com/documentation/audiotoolbox/1405202-audioservicesplayalertsound
SpeechKit adds the record category to the AVAudioSession, so sounds no longer play. What you want to do is:
let systemSoundID: SystemSoundID = 1113
// Change from record mode to play mode
do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
    try AVAudioSession.sharedInstance().setActive(true)
} catch let error as NSError {
    print("Error \(error)")
}
AudioServicesPlaySystemSoundWithCompletion(systemSoundID) {
    // do the recognition
}
https://stackoverflow.com/questions/37288142
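Putting the pieces together, the button handler can switch the session to playback, play the prompt tone, and only start recognition in the tone's completion handler; the delegate callback then restores playback before the end tone. This is a minimal sketch assuming the same Swift 2-era Nuance SpeechKit API used in the question (the session URL and token are placeholders):

```swift
import AudioToolbox
import AVFoundation

// Switch the shared audio session to playback so system sounds are audible.
func activatePlaybackSession() {
    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
        try AVAudioSession.sharedInstance().setActive(true)
    } catch let error as NSError {
        print("Audio session error: \(error)")
    }
}

@IBAction func listenButtonTapped(sender: UIBarButtonItem) {
    activatePlaybackSession()
    let startSoundID: SystemSoundID = 1113
    // Start recognition only after the tone has finished playing;
    // SpeechKit then switches the session to the record category itself.
    AudioServicesPlaySystemSoundWithCompletion(startSoundID) {
        let session = SKSession(URL: NSURL(string: "nmsps://{my Nuance key}@sslsandbox.nmdp.nuancemobility.net:443"), appToken: "{my Nuance token}")
        session.recognizeWithType(SKTransactionSpeechTypeDictation,
                                  detection: .Long,
                                  language: "eng-USA",
                                  delegate: self)
    }
}

func transaction(transaction: SKTransaction!, didReceiveRecognition recognition: SKRecognition!) {
    print(recognition.text)
    // Recognition left the session in record mode; restore playback
    // before playing the end tone.
    activatePlaybackSession()
    AudioServicesPlaySystemSound(1114)
}
```

Note that `AudioServicesPlaySystemSoundWithCompletion` requires iOS 9 or later; on earlier versions you would need a short delay or another completion mechanism before starting recognition.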