I've been trying to follow the example in this article.
Since I'm implementing this in a regular Activity rather than in a Service, I haven't run into the problems described in the post above.
However, I keep getting "no speech results" -- which, as implemented in that post, happens when getStringArrayList(RecognizerIntent.EXTRA_RESULTS) returns null.
Clearly I'm missing something that needs to be done in addition to:
recognizer.setRecognitionListener(listener);
recognizer.startListening(intent);
What am I missing? Could it be that startActivityForResult() is needed in addition to startListening()? If so, I've already tried that, but it brings up Google's entire Voice Search activity (which is exactly what I want to avoid, as @vladimir.vivien wrote here). That causes even more problems, because two recognizers end up running at the same time.
At first I thought the missing piece was the actual submission to Google's servers, but when I examined the LogCat output from the start of the speech-recognition session to its end (see below), I saw that it does create a TCP session with http://www.google.com/m/voice-search.
So the obvious question is: what am I missing?
04-18 07:02:17.770: INFO/RecognitionController(623): startRecognition(#Intent;action=android.speech.action.RECOGNIZE_SPEECH;S.android.speech.extra.LANGUAGE_MODEL=free_form;S.android.speech.extra.PROMPT=LEARNSR;S.calling_package=com.example.learnsr.SrActivity;end)
04-18 07:02:17.770: INFO/RecognitionController(623): State change: STARTING -> STARTING
04-18 07:02:17.780: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:17.780: DEBUG/AudioHardwareQSD(121): Switching audio device to
04-18 07:02:17.780: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:17.780: INFO/AudioHardwareQSD(121): AudioHardware PCM record is going to standby.
04-18 07:02:17.780: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:17.780: DEBUG/AudioHardwareQSD(121): Switching audio device to
04-18 07:02:17.780: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:17.780: INFO/AudioHardwareQSD(121): AudioHardware PCM record is going to standby.
04-18 07:02:17.780: INFO/AudioService(164): AudioFocus requestAudioFocus() from android.media.AudioManager@46036948
04-18 07:02:17.780: DEBUG/AudioFlinger(121): setParameters(): io 3, keyvalue routing=262144;vr_mode=1, tid 155, calling tid 121
04-18 07:02:17.790: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:17.790: INFO/AudioHardwareQSD(121): do input routing device 40000
04-18 07:02:17.790: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:17.790: INFO/RecognitionController(623): State change: STARTING -> RECOGNIZING
04-18 07:02:17.790: INFO/ServerConnectorImpl(623): Starting TCP session, url=http://www.google.com/m/voice-search
04-18 07:02:17.930: DEBUG/ServerConnectorImpl(623): Created session a7918495c042db1746d3e09514baf621
04-18 07:02:17.930: INFO/ServerConnectorImpl(623): Creating TCP connection to 74.125.115.126:19294
04-18 07:02:17.980: DEBUG/AudioHardwareQSD(121): Switching audio device to
04-18 07:02:17.980: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:18.070: INFO/ServerConnectorImpl(623): startRecognize RecognitionParameters{session=a7918495c042db1746d3e09514baf621,request=1}
04-18 07:02:18.390: INFO/RecognitionController(623): onReadyForSpeech, noise level:10.29969, snr:-0.42756215
04-18 07:02:19.760: DEBUG/dalvikvm(659): GC_EXPLICIT freed 5907 objects / 353648 bytes in 67ms
04-18 07:02:21.030: INFO/AudioHardwareQSD(121): AudioHardware pcm playback is going to standby.
04-18 07:02:24.090: INFO/RecognitionController(623): onBeginningOfSpeech
04-18 07:02:24.760: DEBUG/dalvikvm(669): GC_EXPLICIT freed 1141 objects / 74296 bytes in 48ms
04-18 07:02:25.080: INFO/RecognitionController(623): onEndOfSpeech
04-18 07:02:25.080: INFO/AudioService(164): AudioFocus abandonAudioFocus() from android.media.AudioManager@46036948
04-18 07:02:25.140: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:25.200: INFO/RecognitionController(623): State change: RECOGNIZING -> RECOGNIZED
04-18 07:02:25.200: INFO/RecognitionController(623): Final state: RECOGNIZED
04-18 07:02:25.260: INFO/ServerConnectorImpl(623): ClientReport{session_id=a7918495c042db1746d3e09514baf621,request_id=1,application_id=intent-speech-api,client_perceived_request_status=0,request_ack_latency_ms=118,total_latency_ms=7122,user_perceived_latency_ms=116,network_type=1,endpoint_trigger_type=3,}
04-18 07:02:25.260: INFO/AudioService(164): AudioFocus abandonAudioFocus() from android.media.AudioManager@46036948
04-18 07:02:25.270: DEBUG/AudioHardwareQSD(121): Switching audio device to
04-18 07:02:25.270: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:25.270: INFO/RecognitionController(623): State change: RECOGNIZED -> PAUSED
04-18 07:02:25.270: INFO/AudioService(164): AudioFocus abandonAudioFocus() from android.media.AudioManager@46036948
04-18 07:02:25.270: INFO/ClientReportSender(623): Sending 1 client reports over HTTP
04-18 07:02:25.280: INFO/AudioHardwareQSD(121): AudioHardware PCM record is going to standby.
04-18 07:02:25.280: DEBUG/AudioFlinger(121): setParameters(): io 3, keyvalue routing=0, tid 155, calling tid 121
04-18 07:02:25.280: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:25.280: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:25.280: DEBUG/AudioHardwareQSD(121): Switching audio device to
04-18 07:02:25.280: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:25.280: INFO/AudioHardwareQSD(121): AudioHardware PCM record is going to standby.

Posted 2011-04-18 14:51:42
According to the listener documentation, you need to request the results with SpeechRecognizer.RESULTS_RECOGNITION from the Bundle supplied to onResults(). Have you tried that?
RecognizerIntent.EXTRA_RESULTS would be used with the RECOGNIZE_SPEECH intent instead (i.e., the startActivityForResult() approach).
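As a minimal sketch of the fix described above, the onResults() callback of the RecognitionListener would read the bundle with the SpeechRecognizer key, not the RecognizerIntent one (the remaining listener callbacks are omitted here):

```java
@Override
public void onResults(Bundle results) {
    // SpeechRecognizer delivers its hypotheses under this key:
    ArrayList<String> matches =
            results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);

    // This is the likely mistake in the question: EXTRA_RESULTS is the key
    // for data returned via onActivityResult(), so here it yields null.
    // ArrayList<String> wrong = results.getStringArrayList(RecognizerIntent.EXTRA_RESULTS);
}
```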
Posted 2013-11-21 08:59:15
This code works fine:
package com.example.android.voicerecognitionservice;

import java.util.ArrayList;

import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.media.AudioManager;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.widget.TextView;

public class VoiceRecognitionSettings extends Activity implements RecognitionListener {

    /** Text display */
    private TextView blurb;

    /** Parameters for recognition */
    private Intent recognizerIntent;

    /** The ear */
    private SpeechRecognizer recognizer;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.speech);

        blurb = (TextView) findViewById(R.id.text1);

        // muteSystemAudio();

        recognizer = SpeechRecognizer.createSpeechRecognizer(this);
        recognizer.setRecognitionListener(this);

        recognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE,
                "com.example.android.voicerecognitionservice");
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);

        recognizer.startListening(recognizerIntent);
    }

    @Override
    public void onBeginningOfSpeech() {
        blurb.append("[");
    }

    @Override
    public void onBufferReceived(byte[] arg0) {
    }

    @Override
    public void onEndOfSpeech() {
        blurb.append("] ");
    }

    @Override
    public void onError(int arg0) {
    }

    @Override
    public void onEvent(int arg0, Bundle arg1) {
    }

    @Override
    public void onPartialResults(Bundle arg0) {
    }

    @Override
    public void onReadyForSpeech(Bundle arg0) {
        blurb.append("> ");
    }

    @Override
    public void onResults(Bundle bundle) {
        ArrayList<String> results = bundle.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        blurb.append(results.toString() + "\n");
        recognizer.startListening(recognizerIntent);
    }

    @Override
    public void onRmsChanged(float arg0) {
    }

    public void muteSystemAudio() {
        AudioManager amanager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
        amanager.setStreamMute(AudioManager.STREAM_SYSTEM, true);
    }
}

Just give it a try.
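One detail worth noting about the answer above: because onResults() calls startListening() again, the recognizer runs indefinitely. A hedged sketch of cleanup that could be added to the same activity (not part of the original answer) would release it when the activity goes away:

```java
@Override
protected void onDestroy() {
    // Release the recognizer so the microphone and the service
    // connection are not held after the activity is destroyed.
    if (recognizer != null) {
        recognizer.destroy();
        recognizer = null;
    }
    super.onDestroy();
}
```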
Posted 2011-04-18 14:39:15
I won't answer your question directly, but I'd suggest trying to achieve what you want in a different way.
See satur99's comment. Why do you need to write a speech-recognizer class at all? The other person was trying to do it as a Service, but since you're doing it from an Activity, you can simply fire the intent instead. That will save you a lot of effort.
Here are two API tutorial links from Google (I'm just reposting them):
http://developer.android.com/resources/articles/speech-input.html
http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/VoiceRecognition.html