Audio Analysis Using Deep Learning

In this Deep Learning blog, we will study audio analysis using deep learning. We will also learn how data is handled in the audio domain, look at some applications of audio processing, and use graphs to better understand audio data.

Introduction to Audio Analysis

We are always in contact with audio, sometimes directly and sometimes indirectly. Our brain works on it continuously: it processes and understands the information and finally tells us something about our environment.
Some of the sound floating around us is worth capturing, and there are devices that help us catch these sounds and represent them in a computer-readable format.
Examples of these formats are:

  • wav (Waveform Audio File)
  • mp3 (MPEG-1 Audio Layer 3)
  • WMA (Windows Media Audio)

Data Handling in the Audio Domain

Audio is an unstructured data format, so audio data requires a couple of preprocessing steps before it can be presented for analysis.

First, we have to load the data into a machine-understandable format. To do this, we simply take values after every specific time step.

For example, in a 2-second audio file, we extract a value every half second. This is called sampling the audio data, and the rate at which it is sampled is called the sampling rate.
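
As a rough illustration (not from the original post), here is how loading and sampling look in Python using the librosa library; the file name audio.wav is a placeholder:

```python
# A minimal sketch of sampling, assuming the librosa library is installed
# and "audio.wav" is a placeholder file next to the script.
import librosa

# librosa.load resamples to 22050 Hz by default; sr=None keeps the file's native rate.
signal, sample_rate = librosa.load("audio.wav", sr=None)

print(f"Sampling rate: {sample_rate} Hz")              # samples taken per second
print(f"Number of samples: {len(signal)}")             # total data points
print(f"Duration: {len(signal) / sample_rate:.2f} s")  # seconds of audio
```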

We can also represent the data in another way, by converting it into the frequency domain. When we sample audio data in the time domain, we need many data points to represent the whole signal, and the sampling rate should be as high as possible. If we represent the same audio data in the frequency domain, much less computational space is required. To get an intuition, take a look at the image below.

[Image: one audio signal decomposed into three pure frequency components]

Here, one audio signal is separated into three different pure signals, which can now easily be represented as three unique values in the frequency domain. There are also a few more ways in which audio data can be represented for analysis, for example using MFCCs (Mel-Frequency Cepstral Coefficients). These are simply different ways of representing the same data.
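
To make this concrete, here is a small, assumed Python sketch: a signal built from three pure sine waves is moved into the frequency domain with NumPy's FFT, where it collapses to three peaks.

```python
# Assumed illustration: a signal made of three pure sine waves is moved into
# the frequency domain with NumPy's FFT, where it reduces to three peaks.
import numpy as np

sample_rate = 8000                                  # samples per second
t = np.arange(0, 1.0, 1.0 / sample_rate)            # one second of time steps

# Mix of three pure tones at 440 Hz, 880 Hz and 1320 Hz
signal = (np.sin(2 * np.pi * 440 * t)
          + 0.5 * np.sin(2 * np.pi * 880 * t)
          + 0.25 * np.sin(2 * np.pi * 1320 * t))

spectrum = np.abs(np.fft.rfft(signal))              # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

# The three strongest frequency bins are exactly the three component tones
top_three = np.sort(freqs[np.argsort(spectrum)[-3:]])
print(top_three)                                    # -> [ 440.  880. 1320.]
```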

Next, we have to extract features from these audio representations. The algorithm then works on these features and performs the task it is designed for. After extracting the features, we send them to a machine learning model for further analysis.
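
The sketch below is an illustration rather than the post's own pipeline: it extracts MFCC features with librosa, averages them into one vector per clip, and feeds them to a simple classifier. The file names and the nearest-neighbour model are assumptions.

```python
# Hedged sketch of the feature-extraction step, assuming librosa and
# scikit-learn are installed; file names and the classifier are illustrative.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def extract_features(path, n_mfcc=13):
    """Load a clip and return one fixed-length vector: its mean MFCCs."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                        # shape: (n_mfcc,)

# Hypothetical labelled clips
paths  = ["dog_bark_1.wav", "siren_1.wav", "dog_bark_2.wav", "siren_2.wav"]
labels = ["dog", "siren", "dog", "siren"]

X = np.array([extract_features(p) for p in paths])
model = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(model.predict([extract_features("unknown_clip.wav")]))
```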

Applications of Audio Processing

  • Indexing music collections according to their audio features
  • Recommending music for radio channels
  • Similarity search for audio files, as in Shazam (see the sketch after this list)
  • Speech processing and synthesis – generating an artificial voice for conversational agents
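
As an example of the similarity-search idea mentioned above, here is a rough sketch that compares clips by the cosine similarity of their feature vectors; the clip names and the 13-dimensional vectors are hypothetical placeholders.

```python
# Rough sketch of similarity search over audio: every clip is reduced to a
# feature vector (e.g. the mean-MFCC vector from the snippet above) and
# compared by cosine similarity. The clip names and vectors are hypothetical.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query_vec, library):
    """Return the name of the library clip whose vector is closest to the query."""
    return max(library, key=lambda name: cosine_similarity(query_vec, library[name]))

# library maps clip names to precomputed feature vectors
library = {
    "song_a.wav": np.random.rand(13),
    "song_b.wav": np.random.rand(13),
    "song_c.wav": np.random.rand(13),
}
query_vec = np.random.rand(13)
print(most_similar(query_vec, library))
```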
