“400 RecognitionAudio not set” & “InactiveRpcError” [Google Cloud Speech-to-Text API]

January 31, 2020, at 2:30 PM

I would like to build the following:

  1. A user speaks to a web browser.
  2. The web browser records the voice as a WAV file (Recorder.js) and sends it to a server (Google App Engine standard environment, Python 3.7).
  3. The Python server calls the Google Cloud Speech-to-Text API to transcribe the WAV file and sends the transcript back to the web browser. (A standalone sketch of this step follows the list.)
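
For context, here is step 3 in isolation as I understand it: a minimal sketch using the same google-cloud-speech 1.x client API as my main.py below. test.wav is a hypothetical known-good file (mono LINEAR16, which is what Recorder.js exports); this is only a sanity check, not part of the app.

# Standalone sanity check for step 3: transcribe a known-good local WAV.
# test.wav is hypothetical; credentials are assumed to be configured.
import io

from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types

client = speech.SpeechClient()
with io.open('test.wav', 'rb') as audio_file:
    audio = types.RecognitionAudio(content=audio_file.read())
config = types.RecognitionConfig(
    encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code='ja-JP')
response = client.recognize(config, audio)
for result in response.results:
    print(u'Transcript: {}'.format(result.alternatives[0].transcript))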

I got this error message.

2020-01-30 08:37:38 speech[20200130t173543]  "GET / HTTP/1.1" 200
2020-01-30 08:37:38 speech[20200130t173543]  [2020-01-30 08:37:38 +0000] [8] [INFO] Starting gunicorn 20.0.4
2020-01-30 08:37:38 speech[20200130t173543]  [2020-01-30 08:37:38 +0000] [8] [INFO] Listening at: http://0.0.0.0:8081 (8)
2020-01-30 08:37:38 speech[20200130t173543]  [2020-01-30 08:37:38 +0000] [8] [INFO] Using worker: sync
2020-01-30 08:37:38 speech[20200130t173543]  [2020-01-30 08:37:38 +0000] [15] [INFO] Booting worker with pid: 15
2020-01-30 08:37:55 speech[20200130t173543]  "POST / HTTP/1.1" 500
2020-01-30 08:37:56 speech[20200130t173543]  /tmp/file.wav exists
2020-01-30 08:37:56 speech[20200130t173543]  [2020-01-30 08:37:56,717] ERROR in app: Exception on / [POST]
Traceback (most recent call last):
  File "/env/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/env/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/env/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.INVALID_ARGUMENT
    details = "RecognitionAudio not set."
    debug_error_string = "{"created":"@1580373476.716586092","description":"Error received from peer ipv4:172.217.175.42:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"RecognitionAudio not set.","grpc_status":3}"
>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/env/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
    response = self.full_dispatch_request()
  File "/env/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/env/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/env/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/env/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "/env/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/srv/main.py", line 38, in index
    response = client.recognize(config, audio)
  File "/env/lib/python3.7/site-packages/google/cloud/speech_v1/gapic/speech_client.py", line 256, in recognize
    request, retry=retry, timeout=timeout, metadata=metadata
  File "/env/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
    return wrapped_func(*args, **kwargs)
  File "/env/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
    on_error=on_error,
  File "/env/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
    return target()
  File "/env/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
    return func(*args, **kwargs)
  File "/env/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.InvalidArgument: 400 RecognitionAudio not set.

I think there are two problems here. The first one is this:

Traceback (most recent call last):
  File "/env/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
    return callable_(*args, **kwargs)
  File "/env/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/env/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:

I searched "InactiveRpcError google cloud speech api", but I couldn't find a solution.

The second one is this:

Traceback (most recent call last):
  File "/env/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
    response = self.full_dispatch_request()
  File "/env/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/env/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/env/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/env/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "/env/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/srv/main.py", line 38, in index
    response = client.recognize(config, audio)
  File "/env/lib/python3.7/site-packages/google/cloud/speech_v1/gapic/speech_client.py", line 256, in recognize
    request, retry=retry, timeout=timeout, metadata=metadata
  File "/env/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
    return wrapped_func(*args, **kwargs)
  File "/env/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
    on_error=on_error,
  File "/env/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
    return target()
  File "/env/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
    return func(*args, **kwargs)
  File "/env/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
    six.raise_from(exceptions.from_grpc_error(exc), exc)
  File "<string>", line 3, in raise_from
google.api_core.exceptions.InvalidArgument: 400 RecognitionAudio not set.

I searched "InvalidArgument: 400 RecognitionAudio not set". I found a solution to change sample_rate_hertz=16000. So, I changed it to "48000", but got the same error. Also, I removed sample_rate_hertz=16000, but got the same error.

Could you give me any information or suggestions?

Thank you in advance.

Sincerely, Kazu

Here is my directory structure.

.
├── app.yaml
├── credentials.json
├── main.py
├── requirements.txt
├── static
│   └── js
│       └── app.js
└── templates
    └── index.html

This is app.yaml.

runtime: python37
entrypoint: gunicorn -b :$PORT main:app
service: speech

This is main.py.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from flask import Flask
from flask import request
from flask import render_template
from flask import send_file
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types
import os
import io
app = Flask(__name__)
@app.route("/", methods=['POST', 'GET'])
def index():
    if request.method == "POST":
        with open('/tmp/file.wav', 'wb') as f:
            f.write(request.data)
        if os.path.isfile('/tmp/file.wav'):
            print("/tmp/file.wav exists")
        os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="credentials.json"
        client = speech.SpeechClient()
        # [START speech_python_migration_sync_request]
        # [START speech_python_migration_config]
        with io.open('/tmp/file.wav', 'rb') as audio_file:
            content = audio_file.read()
        audio = types.RecognitionAudio(content=content)
        config = types.RecognitionConfig(
            encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code='ja-JP')
        # [END speech_python_migration_config]
        # [START speech_python_migration_sync_response]
        response = client.recognize(config, audio)
        # [END speech_python_migration_sync_request]
        # Each result is for a consecutive portion of the audio. Iterate through
        # them to get the transcripts for the entire audio file.
        transcript = u''
        for result in response.results:
            # The first alternative is the most likely one for this portion.
            print(u'Transcript: {}'.format(result.alternatives[0].transcript))
            transcript += result.alternatives[0].transcript
        # Return the text itself; "return print(...)" evaluates to None,
        # which is not a valid Flask response.
        return u'Transcript: {}'.format(transcript)
    else:
        return render_template("index.html")
if __name__ == "__main__":
    app.run()
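
One thing I am not sure about: index() writes request.data to /tmp/file.wav, but app.js below wraps the blob in FormData, which produces a multipart/form-data body. As far as I know, Flask exposes such uploads via request.files and leaves request.data empty. A hypothetical variant of the upload handling for that case (the field name "audio_data" is the one appended in app.js):

# Hypothetical replacement for the file-writing part of index():
# accept either a multipart upload or a raw-body upload.
f = request.files.get('audio_data')
if f is not None:
    f.save('/tmp/file.wav')        # multipart/form-data upload
else:
    with open('/tmp/file.wav', 'wb') as out:
        out.write(request.data)    # raw body, as the current code assumes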

This is requirements.txt.

Flask
google-cloud-speech
gunicorn

This is index.html.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Simple Recorder.js demo with record, stop and pause - addpipe.com</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
  </head>
  <body>
    <h1>Simple Recorder.js demo</h1>
    <div id="controls">
     <button id="recordButton">Record</button>
     <button id="pauseButton" disabled>Pause</button>
     <button id="stopButton" disabled>Stop</button>
    </div>
    <div id="formats">Format: start recording to see sample rate</div>
    <p><strong>Recordings:</strong></p>
    <ol id="recordingsList"></ol>
    <!-- inserting these scripts at the end to be able to use all the elements in the DOM -->
    <script src="https://cdn.rawgit.com/mattdiamond/Recorderjs/08e7abd9/dist/recorder.js"></script>
    <script src="/static/js/app.js"></script>
  </body>
</html>

This is app.js.

//webkitURL is deprecated but kept as a fallback nevertheless
URL = window.URL || window.webkitURL;
var gumStream;                      //stream from getUserMedia()
var rec;                            //Recorder.js object
var input;                          //MediaStreamAudioSourceNode we'll be recording
// shim for AudioContext when it's not available
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioContext //audio context to help us record
var recordButton = document.getElementById("recordButton");
var stopButton = document.getElementById("stopButton");
var pauseButton = document.getElementById("pauseButton");
//add events to those 2 buttons
recordButton.addEventListener("click", startRecording);
stopButton.addEventListener("click", stopRecording);
pauseButton.addEventListener("click", pauseRecording);
function startRecording() {
    console.log("recordButton clicked");
    /*
        Simple constraints object, for more advanced audio features see
        https://addpipe.com/blog/audio-constraints-getusermedia/
    */
    var constraints = { audio: true, video:false }
    /*
        Disable the record button until we get a success or fail from getUserMedia() 
    */
    recordButton.disabled = true;
    stopButton.disabled = false;
    pauseButton.disabled = false
    /*
        We're using the standard promise based getUserMedia() 
        https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia
    */
    navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
        console.log("getUserMedia() success, stream created, initializing Recorder.js ...");
        /*
            create an audio context after getUserMedia is called
            sampleRate might change after getUserMedia is called, like it does on macOS when recording through AirPods
            the sampleRate defaults to the one set in your OS for your playback device
        */
        audioContext = new AudioContext();
        //update the format 
        document.getElementById("formats").innerHTML="Format: 1 channel pcm @ "+audioContext.sampleRate/1000+"kHz"
        /*  assign to gumStream for later use  */
        gumStream = stream;
        /* use the stream */
        input = audioContext.createMediaStreamSource(stream);
        /* 
            Create the Recorder object and configure to record mono sound (1 channel)
            Recording 2 channels  will double the file size
        */
        rec = new Recorder(input,{numChannels:1})
        //start the recording process
        rec.record()
        console.log("Recording started");
    }).catch(function(err) {
        //enable the record button if getUserMedia() fails
        recordButton.disabled = false;
        stopButton.disabled = true;
        pauseButton.disabled = true
    });
}
function pauseRecording(){
    console.log("pauseButton clicked rec.recording=",rec.recording );
    if (rec.recording){
        //pause
        rec.stop();
        pauseButton.innerHTML="Resume";
    }else{
        //resume
        rec.record()
        pauseButton.innerHTML="Pause";
    }
}
function stopRecording() {
    console.log("stopButton clicked");
    //disable the stop button, enable the record button to allow for new recordings
    stopButton.disabled = true;
    recordButton.disabled = false;
    pauseButton.disabled = true;
    //reset button just in case the recording is stopped while paused
    pauseButton.innerHTML="Pause";
    //tell the recorder to stop the recording
    rec.stop();
    //stop microphone access
    gumStream.getAudioTracks()[0].stop();
    //create the wav blob and pass it on to createDownloadLink
    rec.exportWAV(createDownloadLink);
}
function createDownloadLink(blob) {
    var url = URL.createObjectURL(blob);
    var au = document.createElement('audio');
    var li = document.createElement('li');
    var link = document.createElement('a');
    //name of .wav file to use during upload and download (without extension)
    var filename = new Date().toISOString();
    //add controls to the <audio> element
    au.controls = true;
    au.src = url;
    //save to disk link
    link.href = url;
    link.download = filename+".wav"; //the download attribute forces the browser to download the file using the filename
    link.innerHTML = "Save to disk";
    //add the new audio element to li
    li.appendChild(au);
    //add the filename to the li
    li.appendChild(document.createTextNode(filename+".wav "))
    //add the save to disk link to li
    li.appendChild(link);
    //upload link
    var upload = document.createElement('a');
    upload.href="#";
    upload.innerHTML = "Upload";
    upload.addEventListener("click", function(event){
          var xhr=new XMLHttpRequest();
          xhr.onload=function(e) {
              if(this.readyState === 4) {
                  console.log("Server returned: ",e.target.responseText);
              }
          };
          var fd=new FormData();
          fd.append("audio_data",blob, filename);
          xhr.open("POST","/",true);
          xhr.send(fd);
    })
    li.appendChild(document.createTextNode (" "))//add a space in between
    li.appendChild(upload)//add the upload link to li
    //add the li element to the ol
    recordingsList.appendChild(li);
}
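
Finally, to test the server without going through the browser, I suppose the app.js upload could be simulated from Python (a hypothetical local test; the URL, port, and test.wav are placeholders, and requests is not in my requirements.txt):

# Hypothetical local test that mimics the app.js upload:
# POST a WAV as multipart/form-data under the "audio_data" field.
import requests

with open('test.wav', 'rb') as f:
    resp = requests.post('http://localhost:8080/',
                         files={'audio_data': ('test.wav', f, 'audio/wav')})
print(resp.status_code)
print(resp.text)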