Using Google Speech-to-Text, I can transcribe an audio clip with the default parameters. However, when I use the enable_speaker_diarization flag to identify the individual speakers in the audio clip, I get an error. Google documents it here. This is a long audio clip, so I am using an asynchronous request, as Google recommends here.
My code is -
def transcribe_gcs(gcs_uri):
    from google.cloud import speech
    from google.cloud import speech_v1 as speech
    from google.cloud.speech import enums
    from google.cloud.speech import types
    client = speech.SpeechClient()
    audio = types.RecognitionAudio(uri=gcs_uri)
    config = speech.types.RecognitionConfig(encoding=speech.enums.RecognitionConfig.AudioEncoding.FLAC,
                                            sample_rate_hertz=16000,
                                            language_code='en-US',
                                            enable_speaker_diarization=True,
                                            diarization_speaker_count=2)
    operation = client.long_running_recognize(config, audio)
    print('Waiting for operation to complete...')
    response = operation.result(timeout=3000)
    result = response.results[-1]
    words_info = result.alternatives[0].words
    for word_info in words_info:
        print("word: '{}', speaker_tag: {}".format(word_info.word, word_info.speaker_tag))
After calling it with -
transcribe_gcs('gs://bucket_name/filename.flac')
I got the error
ValueError: Protocol message RecognitionConfig has no "enable_speaker_diarization" field.
I am sure this is related to the library; I have tried every variation I could find, such as
from google.cloud import speech_v1p1beta1 as speech
from google.cloud import speech
but I keep getting the same error. Note - I had already authenticated with a JSON file before running this code.
Posted on 2019-01-22 01:05:25
The enable_speaker_diarization=True parameter of speech.types.RecognitionConfig is currently only available in the speech_v1p1beta1 library, so you need to import that library instead of the default speech module in order to use the parameter. I made some modifications to your code and it works fine for me. Note that you need a service account to run this code.
def transcribe_gcs(gcs_uri):
    from google.cloud import speech_v1p1beta1 as speech
    from google.cloud.speech_v1p1beta1 import enums
    from google.cloud.speech_v1p1beta1 import types
    client = speech.SpeechClient()
    audio = types.RecognitionAudio(uri=gcs_uri)
    config = speech.types.RecognitionConfig(language_code='en-US',
                                            enable_speaker_diarization=True,
                                            diarization_speaker_count=2)
    operation = client.long_running_recognize(config, audio)
    print('Waiting for operation to complete...')
    response = operation.result(timeout=3000)
    result = response.results[-1]
    words_info = result.alternatives[0].words
    tag = 1
    speaker = ""
    for word_info in words_info:
        if word_info.speaker_tag == tag:
            speaker = speaker + " " + word_info.word
        else:
            print("speaker {}: {}".format(tag, speaker))
            tag = word_info.speaker_tag
            speaker = "" + word_info.word
    print("speaker {}: {}".format(tag, speaker))
The result should look like this:

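Since the transcript output itself is not shown above, here is a minimal, self-contained sketch of just the speaker-grouping loop from that answer, run against mock word objects. The WordInfo tuple and the sample words are invented for illustration; the real objects come from the Speech-to-Text API response.

```python
from collections import namedtuple

# Mock stand-in for the word results returned by the API.
WordInfo = namedtuple('WordInfo', ['word', 'speaker_tag'])

def group_by_speaker(words_info):
    """Group consecutive words by speaker_tag, mirroring the answer's loop."""
    lines = []
    tag = 1
    speaker = ""
    for word_info in words_info:
        if word_info.speaker_tag == tag:
            speaker = speaker + " " + word_info.word
        else:
            lines.append("speaker {}: {}".format(tag, speaker))
            tag = word_info.speaker_tag
            speaker = "" + word_info.word
    # Flush the final speaker's accumulated words.
    lines.append("speaker {}: {}".format(tag, speaker))
    return lines

words = [WordInfo("hello", 1), WordInfo("there", 1),
         WordInfo("hi", 2), WordInfo("how", 2), WordInfo("are", 2),
         WordInfo("you", 1)]
for line in group_by_speaker(words):
    print(line)
```

Note one quirk of the original loop: the very first segment keeps an extra leading space, because the accumulator starts as an empty string and every matching word is prepended with " ".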
Posted on 2019-07-15 03:54:42
The cause of this error is similar for Node JS users. Import the beta features with this call, then use the speaker diarization functionality.
const speech = require('@google-cloud/speech').v1p1beta1;
Posted on 2019-11-08 15:54:40
This error occurs because you have not imported the right modules. To fix it, add the following imports.
from google.cloud import speech_v1p1beta1 as speech
from google.cloud.speech_v1p1beta1 import enums
from google.cloud.speech_v1p1beta1 import types
https://stackoverflow.com/questions/54271749
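To see why the original ValueError appears at all, here is a hedged, pure-Python sketch of how a protobuf message constructor rejects a keyword for a field the message does not define. No Google library is required to run it, and the field sets below are illustrative subsets, not the real messages' full field lists.

```python
# Illustrative subsets of RecognitionConfig fields per module version;
# NOT the complete field lists of the actual protobuf messages.
V1_FIELDS = {"encoding", "sample_rate_hertz", "language_code"}
V1P1BETA1_FIELDS = V1_FIELDS | {"enable_speaker_diarization",
                                "diarization_speaker_count"}

def make_config(known_fields, **kwargs):
    """Mimic protobuf's constructor check: unknown keywords raise ValueError."""
    for key in kwargs:
        if key not in known_fields:
            raise ValueError('Protocol message RecognitionConfig has no '
                             '"{}" field.'.format(key))
    return dict(kwargs)

# The v1-style field set rejects the diarization flag...
try:
    make_config(V1_FIELDS, language_code="en-US",
                enable_speaker_diarization=True)
except ValueError as e:
    print(e)

# ...while the v1p1beta1-style field set accepts it.
config = make_config(V1P1BETA1_FIELDS, language_code="en-US",
                     enable_speaker_diarization=True,
                     diarization_speaker_count=2)
print(sorted(config))
```

This is only a mock of the failure mode: the v1 RecognitionConfig message simply has no such field, so the keyword is rejected at construction time, before any request reaches the API.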