
Latest posts


I just wanted to let everyone know that my colleague Heather Crawford and I recently published a paper on Quantum Authentication Research Directions at the WAY 2019 workshop.


I thought I would take a little time to talk about Quantum Teleportation and show you how to run your own Quantum Teleportation experiments on IBM’s Quantum Experience, so I wrote up an article on Medium.com that explains the physics behind teleportation and walks you through the QISKit code. Please take a look and share your thoughts on Quantum Computing with me.


Here is a link to a recent article that I put together that explains Quantum Superdense Coding with some examples you can run on IBM Q.


In this blog post, I am going to quickly explain how you can train Watson Speech to Text to handle misunderstood words. As we will see, it is very easy to do. The first thing we need to do is create a training file. The typical workflow I follow is to first let Watson Speech to Text process my audio stream using one of the default models (e.g., broadband or narrowband) and then see which words or phrases it had trouble recognizing.

Once you have figured out which words Watson had trouble with, you can then create the training words to handle them. The training words are a simple JSON payload that contains an entry for each misunderstood word or phrase. In the example below, my training words contain entries for two custom words: evolve and explore.

[
    {
        "word": "evolve",
        "sounds_like": ["appaled", "evolve"],
        "display_as": "evolve"
    },
    {
        "word": "explore",
        "sounds_like": ["floor", "explore"],
        "display_as": "explore"
    }
]

You will notice that for each training word I have listed the other words it sounds like, and I have also provided the text that should be displayed when Watson uses these custom word definitions during speech recognition.

The next thing you will need to do is create a custom model to upload the training words to. In the example below, I am creating a custom model based on the US English broadband model with a simple CURL command. Be sure to fill in your username and password.

curl -X POST -u "{username}":"{password}" \
--header "Content-Type: application/json" \
--data "{\"name\": \"My model\",
\"base_model_name\": \"en-US_BroadbandModel\",
\"description\": \"My custom language model\"}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations"

Once you have created the custom model, you can add the training words to it through a CURL command as well. Be sure to once again put in your username and password, along with the customization id returned by the last CURL command.

curl -X POST -u "{username}":"{password}" \
--header "Content-Type: application/json" \
--data "{\"words\":
[{\"word\": \"evolve\", \"sounds_like\": [\"appaled\", \"evolve\"], \"display_as\": \"evolve\"},
{\"word\": \"explore\", \"sounds_like\": [\"floor\", \"explore\"], \"display_as\": \"explore\"}]}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/words"

After you have uploaded your training words, you will need to check the status of your model to verify that it is ready to be trained. You do that by issuing the following CURL command. Be sure to supply your username and password.

curl -X GET -u "{username}":"{password}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations"

After you make this call, check the status field in the response and verify that the model is in the ready state. Once it is, you can proceed to training.
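For reference, the list of customizations comes back as JSON. Here is an abridged, illustrative example of what the response might look like (the exact fields may differ, so consult the API reference):

{
  "customizations": [
    {
      "customization_id": "{customization_id}",
      "name": "My model",
      "base_model_name": "en-US_BroadbandModel",
      "description": "My custom language model",
      "status": "ready"
    }
  ]
}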

To train Watson Speech to Text with this new customized model, you issue one more CURL command. Just like before, make sure you provide your username, password, and customization id.

curl -X POST -u "{username}":"{password}" "https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}/train"

After you have trained Watson Speech to Text, you need to once again check the status of your model and verify that it is in the available state before attempting to use the customized model for recognizing speech.
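If you prefer to poll just the model you trained rather than listing all of your customizations, you should also be able to GET the individual model. Something along these lines should work (same username, password, and customization id as before):

curl -X GET -u "{username}":"{password}" \
"https://stream.watsonplatform.net/speech-to-text/api/v1/customizations/{customization_id}"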

If you want more detailed information on the Watson Speech to Text APIs, follow this link to the API reference.


In this video, I show you how to use the Globalization Pipeline service on Bluemix to translate caption files in SubRip format.


Here is a video that I put together to show you how to use the Subtitler utility for generating caption files for videos using Watson Speech to Text.


In this post, I am going to show how you can use a set of utilities that I have put together to automatically create a SubRip .srt file from an .mp4 video using Watson Speech to Text, and then translate the SubRip file into other languages using the Globalization Pipeline service on Bluemix. Here is a link to the utilities on GitHub.

The first thing you will need to do is create a Bluemix account and create instances of both the Watson Speech to Text and Globalization Pipeline services. Once you have your service instances created, copy the credentials for the services and place them in the credentials files in the cloned GitHub repo.

To create a SubRip file, all you need to do is call the subtitler utility on your video and it will automatically create a SubRip file for you. I have also included a utility called segmenter that will attempt to take the raw captured text from subtitler and add proper English punctuation. When you are ready to translate your SubRip file into other languages, simply use the translator utility.
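To give a feel for the end-to-end flow, here is a rough sketch of what the commands might look like; the arguments shown are hypothetical, so check the README in the GitHub repo for the real invocations:

# Hypothetical invocations; see the repo's README for the actual arguments
subtitler my-video.mp4             # generate my-video.srt with Watson Speech to Text
segmenter my-video.srt             # attempt to add proper English punctuation
translator my-video.srt fr,de,es   # translate the captions with Globalization Pipeline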

There are a few things to consider when using these utilities:

  1. Very lengthy video files could take a rather long time to process, because the subtitler utility does not perform any parallelization when calling Watson Speech to Text.
  2. I have placed a limit of translating no more than 1,000 subtitles per file when using the translator utility, because the Globalization Pipeline service only allows 1,000 strings per resource bundle. The source code could be updated to automatically create additional bundles when processing large SubRip files.

I have tested the utilities with videos up to about 20 minutes in length. I will be uploading a YouTube video later that will walk you through all the steps, but this post should be enough to get you started.


As development teams build and deploy their cloud applications, they often struggle with how to get their applications quickly translated into multiple languages to capture the global market. Late last year IBM released the Globalization Pipeline service on IBM Bluemix to help developers integrate machine translation into their DevOps processes so they can quickly translate their applications. IBM has now extended the service so that, in addition to machine translation, you can also gain access to professional translators who can review and edit your machine-translated strings. You can learn more about the service at this blog post and by watching this video.


Occasionally you may encounter a need to define an action in OpenWhisk where you have to perform a number of steps in a specific order and return the result asynchronously. This is rather simple to achieve in OpenWhisk by using async.series wrapped in a Promise. The following code example shows you how to do this. The key thing to keep in mind is that each series function calls the callback to indicate whether it succeeded or encountered an error. Once all of the functions in the series have been called, the final callback runs, and that is where you settle the promise: if an error occurred anywhere in the series, reject is called; if all of the function calls were successful, resolve is called.

// Using async.series wrapped in a Promise
function main(params) {

    var async = require("async");

    return new Promise(function (resolve, reject) {
        async.series([
            function (callback) {
                var condition = true;
                var error = {message: 'failed'};
                var data = {value: 'myValue'};

                // Do some stuff, then report success or failure through the callback
                if (condition) {
                    callback(null, data);
                }
                // error
                else {
                    callback(error, null);
                }
            },
            function (callback) {
                // Do more stuff here, then signal completion so the series can finish
                callback(null, {value: 'anotherValue'});
            }
        ],
        // Final callback: runs after every series function completes, or on the first error
        function (error, results) {
            if (error) {
                reject({"error": error.message});
            }
            else {
                // results is an array with one entry per series function
                resolve({"results": results[0].value});
            }
        });
    });
}
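If you would like to try this out, one way is to deploy and invoke the action with the OpenWhisk CLI. The action and file names below are just placeholders, and this assumes the async package is available in your Node.js runtime (otherwise bundle it with the action as a zip):

wsk action create asyncSeriesAction action.js
wsk action invoke asyncSeriesAction --blocking --result

Because the action returns a Promise, OpenWhisk waits for resolve or reject to be called before returning the result.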

My colleague Tracy De Cicco and I just published on LinkedIn our thoughts on where digital advertising is headed. In the article we give a historical overview of digital advertising and then discuss how new cognitive services can be used to dynamically tune ad copy for more effective uptake.

