Thursday, August 25, 2016

MMC2 Linux Device Tree Configuration For SD Card on ARM, Part II





What has changed since earlier kernel releases?

  • There was a recent change (end of last year) to the dt bindings for eDMA3.  Older kernels used a now-DEPRECATED binding (a single edma node with compatible = "ti,edma3") in their DTS files.  With that old binding, to get mmc2 (labeled mmc3 in the dts files) to work properly you must use the ti,edma-xbar-event-map property on the edma node:


&mmc3 {
      vmmc-supply = <&vmmcsd_fixed>;
      ti,dual-volt;
      ti,needs-special-reset;
      ti,needs-special-hs-handling;
      pinctrl-names = "default";
      pinctrl-0 = <&mmc3_pins>;
      cd-gpios = <&gpio0 31 GPIO_ACTIVE_HIGH>;
      cd-inverted;
      bus-width = <4>;
      max-frequency = <25000000>;
      dmas = <&edma 12
              &edma 13>;
      dma-names = "tx", "rx";
      status = "okay";
};

&edma {
      ti,edma-xbar-event-map = /bits/ 16 <1 12
                                          2 13>;
};


What are the new dt bindings for eDMA3?

  • "ti,edma3-tpcc" for the channel controller(s)
  • "ti,edma3-tptc" for the transfer controller(s)
  • "ti,am335x-edma-crossbar" for Crossbar event to channel map
The changes to the mmc3 configuration are shown below; note that the dmas property now references the crossbar (&edma_xbar):

&mmc3 {
      vmmc-supply = <&vmmcsd_fixed>;
      ti,dual-volt;
      ti,needs-special-reset;
      ti,needs-special-hs-handling;
      pinctrl-names = "default";
      pinctrl-0 = <&mmc3_pins>;
      cd-gpios = <&gpio3 16 GPIO_ACTIVE_LOW>;
      bus-width = <4>;
      max-frequency = <25000000>;

      dmas = <&edma_xbar 12 0 1
              &edma_xbar 13 0 2>;

      #address-cells = <1>;
      #size-cells = <0>;

      dma-names = "tx", "rx";
      status = "okay";
};

  • This is assuming that the pinmux configuration for mmc3 (&mmc3_pins) is:

mmc3_pins: pinmux_mmc3_pins {

    pinctrl-single,pins = <

      /* gpmc_a1.mmc2_dat0, INPUT_PULLUP | MODE3 */
      0x44 (PIN_INPUT_PULLUP | MUX_MODE3)

      /* gpmc_a2.mmc2_dat1, INPUT_PULLUP | MODE3 */
      0x48 (PIN_INPUT_PULLUP | MUX_MODE3)

      /* gpmc_a3.mmc2_dat2, INPUT_PULLUP | MODE3 */
      0x4C (PIN_INPUT_PULLUP | MUX_MODE3)

      /* gpmc_ben1.mmc2_dat3, INPUT_PULLUP | MODE3 */
      0x78 (PIN_INPUT_PULLUP | MUX_MODE3)

      /* gpmc_csn3.mmc2_cmd, INPUT_PULLUP | MODE3 */
      0x88 (PIN_INPUT_PULLUP | MUX_MODE3)

      /* gpmc_clk.mmc2_clk, INPUT_PULLUP | MODE3 */
      0x8C (PIN_INPUT_PULLUP | MUX_MODE3)

      /* gpmc_a0.gpio1_16 */
      0x40 (PIN_OUTPUT_PULLDOWN | MUX_MODE7)

      /* mmc2_sdcd, p9_13, Note: Don't know why, but we set the card
         detect pin to be a GPIO */
      0x74 (PIN_INPUT_PULLDOWN | MUX_MODE7)

      /* mmc2_sdwp, p9_17, Note: Write protect is not configured
         in the device tree settings */
      0x15c (PIN_INPUT_PULLDOWN | MUX_MODE1)

    >;

};

Friday, July 1, 2016

MMC2 Linux Device Tree Configuration For SD Card on ARM, Part I

Update: 


  • This blog post is useful for ARM microprocessors running Linux kernel versions 4.1.2-ti-r4 to 4.4.0.
  • It could still be relevant for earlier kernel releases, but earlier releases have not been tested.
  • If your ARM microprocessor is running kernel 4.4.16-ti-rt or newer, go to Part II:

Interfacing a second SD card reader to the BeagleBone Black

  • I could not find any tutorials or guides in the forums on how to interface another SD card to the BeagleBone Black, so I thought I'd share how I got mine up and running.  I won't explain the device tree bindings in detail, but you can use my solution as a reference.


MMC2 PINMUX CONFIGURATION:


Added in file: am335x-bone-common.dtsi

mmc3_pins: pinmux_mmc3_pins {

    pinctrl-single,pins = <

        0x44 (PIN_INPUT_PULLUP | MUX_MODE3)    /* gpmc_a1.mmc2_dat0, INPUT_PULLUP | MODE3 */

        0x48 (PIN_INPUT_PULLUP | MUX_MODE3)    /* gpmc_a2.mmc2_dat1, INPUT_PULLUP | MODE3 */

        0x4C (PIN_INPUT_PULLUP | MUX_MODE3)    /* gpmc_a3.mmc2_dat2, INPUT_PULLUP | MODE3 */

        0x78 (PIN_INPUT_PULLUP | MUX_MODE3)    /* gpmc_ben1.mmc2_dat3, INPUT_PULLUP | MODE3 */

        0x88 (PIN_INPUT_PULLUP | MUX_MODE3)    /* gpmc_csn3.mmc2_cmd, INPUT_PULLUP | MODE3 */

        0x8C (PIN_INPUT_PULLUP | MUX_MODE3)    /* gpmc_clk.mmc2_clk, INPUT_PULLUP | MODE3 */

        0x40 (PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_a0.gpio1_16 */

        0x74 (PIN_INPUT_PULLDOWN | MUX_MODE7)  /* mmc2_sdcd, p9_13, Note: Don't know why, but we set the card detect pin to be a GPIO */

        0x15c (PIN_INPUT_PULLDOWN | MUX_MODE1) /* mmc2_sdwp, p9_17, Note: Write protect is not configured in the device tree settings */

    >;

};
 

  • Note that the mmc0, mmc1, and mmc2 lines on the BeagleBone refer to mmc1, mmc2, and mmc3 in the device tree.


         0x88 (PIN_INPUT_PULLUP | MUX_MODE3) /* gpmc_csn3.mmc2_cmd, INPUT_PULLUP | MODE3 */ 
                    ...
                    ...
         0x40 (PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_a0.gpio1_16 */ 

  • You can see that the two lines are connected:





MMC2 DEVICE TREE BINDINGS:




Added in file: am335x-bone-common.dtsi


&mmc3 {
      vmmc-supply = <&vmmcsd_fixed>;
      ti,dual-volt;
      ti,needs-special-reset;
      ti,needs-special-hs-handling;
      pinctrl-names = "default";
      pinctrl-0 = <&mmc3_pins>;
      cd-gpios = <&gpio0 31 GPIO_ACTIVE_HIGH>;
      cd-inverted;
      bus-width = <4>;
      max-frequency = <25000000>;
      dmas = <&edma 12
              &edma 13>;
      dma-names = "tx", "rx";
      status = "okay";
};

&edma {
      ti,edma-xbar-event-map = /bits/ 16 <1 12
                                          2 13>;
};
 


    Tuesday, June 14, 2016

    Howto: Design and Code a Music Visualizer

    Just here for code? Look no further.

    What is a Music Visualizer?

    A music visualizer generates visuals based on the music being played. demo


    How to Implement a Music Visualizer?


    1. Process the audio file and run a Fourier transform on the audio data to get information about the original sound wave (amplitude and frequency)
    2. Store this data 
    3. Output a visual based on the stored data when music is played

    Things to Think About Before Coding

    • How to play the sound?
    • How to implement the Fourier transform?
    • How to interpret the information from the Fourier transform?
    • How to sync visual with music?
    • What does the data in an audio file represent? 

    How I Implemented my Music Visualization Software

    I wrote my visualization software in C and used the SDL2 sound API to play an audio WAV file.  To compute the Fourier transform I used FFTW, a C library known for efficiently computing discrete Fourier transforms (DFTs).  My visuals (a power spectrum from selected frequency ranges) are output to the Linux terminal.
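
    To make that setup concrete, here is a minimal sketch of loading a WAV with SDL2 and running one FFT with FFTW.  This is just a sketch, not my project code: the file name "song.wav", the buffer size N, and the variable names are placeholders, and it assumes the WAV holds 16-bit signed samples and at least N frames.

    /* Minimal setup sketch: load a WAV with SDL2 and run one FFT with FFTW.
     * Assumes a 16-bit signed WAV; "song.wav" and N are placeholders.
     * Build with something like: gcc vis.c -lSDL2 -lfftw3 -lm */
    #include <SDL2/SDL.h>
    #include <fftw3.h>
    #include <math.h>
    #include <stdio.h>

    #define N 2048   /* samples (frames) analyzed per FFT */

    int main(void)
    {
        SDL_AudioSpec wavSpec;
        Uint8 *wavBuffer;
        Uint32 wavLength;

        if (SDL_Init(SDL_INIT_AUDIO) != 0 ||
            SDL_LoadWAV("song.wav", &wavSpec, &wavBuffer, &wavLength) == NULL)
            return 1;

        /* Real-to-complex transform: N real samples in, N/2+1 complex bins out */
        double *in = fftw_alloc_real(N);
        fftw_complex *out = fftw_alloc_complex(N / 2 + 1);
        fftw_plan plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE);

        /* Copy the first N frames (first channel of 16-bit audio) into the input */
        Sint16 *samples = (Sint16 *)wavBuffer;
        for (int i = 0; i < N; i++)
            in[i] = (double)samples[i * wavSpec.channels];

        fftw_execute(plan);

        /* out[k][0] is the real part of bin k, out[k][1] the imaginary part */
        double mag = sqrt(out[1][0] * out[1][0] + out[1][1] * out[1][1]);
        printf("bin 1 = %.1f Hz, magnitude %.1f\n", 1.0 * wavSpec.freq / N, mag);

        fftw_destroy_plan(plan);
        fftw_free(in);
        fftw_free(out);
        SDL_FreeWAV(wavBuffer);
        SDL_Quit();
        return 0;
    }

    FFTW also has a complex-to-complex interface (which is what the pseudocode further down mimics), but for real audio data the r2c plan shown here does the same job with half the output.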



    Using DFT Results to Calculate Sound Frequency

    Calculating the frequencies from the DFT is a bit tricky.  The DFT result for bin k comes from summing the signal against a reference wave at frequency index k, where k runs from 0 to N-1 and N is the number of samples (frames).  This summation acts as a filter (read up on constructive and destructive interference of waves).  The DFT returns how much of frequency k is present in the signal (amplitude and phase), which is represented in complex form, i.e. real and imaginary values.
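
    For reference, the transform being computed is the standard DFT (this is the textbook definition, nothing specific to my code):

    X[k] = \sum_{n=0}^{N-1} x[n] \, e^{-i 2\pi k n / N},   for k = 0, 1, ..., N-1

    where x[n] are the N audio samples and X[k] is the complex value (real and imaginary parts) for bin k.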

    Now, to calculate the sound frequency from the DFT bin index, we need to use the sampling rate value:


    freq = i * Fs / N;      (1)

    where,

    freq = frequency in Hertz,

    i = index (position in the DFT output; you can also think of it as the number of cycles),

    Fs = sampling rate of audio,

    N = size of FFT buffer or array.



    To explain further, let's say that:



    N = 2048          //a buffer that holds 2048 audio data samples (frames)

    Fs = 44100       //a common sample rate [frames per sec] for audio signals: 44.1 kHz



    The spectral bin numbers, aka frequency bins, using equation (1) from above would be:



        bin:      i   *   Fs    /    N    =     freq

         0  :     0   *  44100  /  2048   =      0.0 Hz
         1  :     1   *  44100  /  2048   =     21.5 Hz
         2  :     2   *  44100  /  2048   =     43.1 Hz
         3  :     3   *  44100  /  2048   =     64.6 Hz
         4  :     ...
         5  :     ...

       1024 :   1024  *  44100  /  2048   =  22050 Hz (22.05 kHz)
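
    For the curious, the table above can be reproduced with a few lines of C that just apply equation (1); Fs and N are the same values as above:

    /* Print the frequency of a few DFT bins using equation (1): freq = i * Fs / N */
    #include <stdio.h>

    int main(void)
    {
        const double Fs = 44100.0;  /* sample rate in frames per second */
        const int    N  = 2048;     /* FFT size */

        for (int i = 0; i <= 4; i++)
            printf("bin %4d : %8.1f Hz\n", i, i * Fs / N);
        printf("bin %4d : %8.1f Hz (Nyquist)\n", N / 2, (N / 2) * Fs / N);
        return 0;
    }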

    --
    Note that the useful index range for frequencies is from 1 to N/2. The 0th bin represents "DC" and the N/2-th bin represents the "Nyquist" frequency. Bins above the Nyquist frequency contain redundant data (for real input they mirror the lower half).

    Also note that the magnitude of each bin is needed to create the power spectrum.

    Finding Peak Magnitude and Using it to Find the Peak Frequency

    For our visual we need to distinguish which frequency (out of the N/2 useful bins) has the strongest power (peak magnitude).   So we'll need to find the position of this peak magnitude and, from it, the peak frequency.

    Now, to find the magnitude, we need the results from the DFT.  The DFT gives us the real (re) and imaginary (im) values, so we can treat these values as coordinates and use the Pythagorean theorem to find the magnitude (mag):

    re^2 + im^2 = mag^2;       so,
    mag = sqrt(re*re + im*im)

    To find the peak frequency over all 2048 sample frames, we need to find the index where the magnitude is largest, then substitute that index for "i" in the frequency equation (1).   The pseudocode algorithm would look like:


    // copy real input data to complex FFT buffer
    for i = 0 to N - 1
        fft[2*i] = data[i]
        fft[2*i+1] = 0
    perform in-place complex-to-complex FFT on fft[] buffer

    // calculate power spectrum (magnitude) values from fft[]
    for i = 0 to N / 2 - 1
        re = fft[2*i]
        im = fft[2*i+1]
        magnitude[i] = sqrt(re*re+im*im)

    // find largest peak in power spectrum
    max_magnitude = -INF
    max_index = -1
    for i = 0 to N / 2 - 1
        if magnitude[i] > max_magnitude
            max_magnitude = magnitude[i]
            max_index = i

    // convert index of largest peak to frequency
    freq = max_index * Fs / N

    --
    Instead of only calculating a single peak frequency based on the peak magnitude over N (2048) sample frames, I calculated multiple peak frequencies and peak magnitudes for the following frequency ranges also:

    • 20 to 140 Hz:  Bass range
    • 140 to 400 Hz:  Mid-Bass range
    • 400 to 2600 Hz:  Midrange
    • 2600 to 5200 Hz:  Upper Midrange
    • 5200 Hz to Nyquist:  High end

     
    The C implementation would look like:

    /* Fragment from my analysis loop: F is the FFT size (2048), fftw.out and
     * fftw.magnitude come from my FFTW wrapper, wavSpec/audio hold the WAV info. */
    double max[5] = {            /* per-band peak magnitudes, seeded with a tiny value */
        1.7E-308,
        1.7E-308,
        1.7E-308,
        1.7E-308,
        1.7E-308
    };

    double re, im;
    double peakmax = 1.7E-308;
    int max_index = -1;

    for (int m = 0; m < F/2; m++){
        re = fftw.out[m][0];
        im = fftw.out[m][1];

        fftw.magnitude[m] = sqrt(re*re + im*im);

        float freq = m * (float)wavSpec.freq / F;

        if(freq > 19 && freq <= 140){               /* bass */
            if(fftw.magnitude[m] > max[0]){
                max[0] = fftw.magnitude[m];
            }
        }
        else if(freq > 140 && freq <= 400){         /* mid-bass */
            if(fftw.magnitude[m] > max[1]){
                max[1] = fftw.magnitude[m];
            }
        }
        else if(freq > 400 && freq <= 2600){        /* midrange */
            if(fftw.magnitude[m] > max[2]){
                max[2] = fftw.magnitude[m];
            }
        }
        else if(freq > 2600 && freq <= 5200){       /* upper midrange */
            if(fftw.magnitude[m] > max[3]){
                max[3] = fftw.magnitude[m];
            }
        }
        else if(freq > 5200 && freq <= audio.SamplesFrequency/2){   /* high end, up to Nyquist */
            if(fftw.magnitude[m] > max[4]){
                max[4] = fftw.magnitude[m];
            }
        }
        if(fftw.magnitude[m] > peakmax){            /* overall peak across all bins */
            peakmax = fftw.magnitude[m];
            max_index = m;
        }
    } /* end for */

    --
    To simplify the code, we can store the frequency ranges into an array and just process that array:  

    /* freq_bin holds the band edges (nyquist = sample rate / 2); BUCKETS is the
     * number of bands (5 here); frames, peakmax, peakmaxArray, etc. come from
     * the surrounding program. */
    double freq_bin[] = { 19.0, 140.0, 400.0, 2600.0, 5200.0, nyquist };

    for(int j = 0; j < frames/2; ++j){

        re = fftw.out[j][0];
        im = fftw.out[j][1];

        magnitude = sqrt(re*re + im*im);

        double freq = j * (double)wavSpec.freq / frames;

        /* update the peak of whichever band this bin falls into */
        for (int i = 0; i < BUCKETS; ++i){
            if((freq > freq_bin[i]) && (freq <= freq_bin[i+1])){
                if (magnitude > peakmaxArray[i]){
                    peakmaxArray[i] = magnitude;
                }
            }
        }

        /* track the overall peak as well */
        if(magnitude > peakmax){
            peakmax = magnitude;
            max_index = j;
        }
    }
    --
    We now have frequency and power information about the original sound wave and can store this data in another array, which will later be accessed to create our visual.

    For this specific example, the algorithm analyzes at most 2048 sample frames at a time. Run it "n" times until all of the waveform data in the audio file has been processed. I'll leave it up to you to find the value of "n". Hint: it requires knowing the size of the audio data and other useful information about the sound in a WAV file, so read up on WAV audio files. (A sketch of the arithmetic follows below.)
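
    As a sketch of the arithmetic that hint points at (assuming the WAV was loaded with SDL_LoadWAV into wavBuffer and wavLength as in the setup sketch earlier, and N is the FFT size):

    /* Sketch: how many N-frame chunks does the loaded WAV contain?
     * wavSpec, wavBuffer and wavLength come from SDL_LoadWAV. */
    int bytes_per_sample = SDL_AUDIO_BITSIZE(wavSpec.format) / 8;
    int bytes_per_frame  = bytes_per_sample * wavSpec.channels;
    Uint32 total_frames  = wavLength / bytes_per_frame;
    Uint32 n = (total_frames + N - 1) / N;   /* FFT passes; the last chunk may be short */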

    Lastly, we can create a visual in the form of a magnitude vs. frequency 2D graph, or any 3D representation (like a sphere), while the music is playing!

    But how do we sync the visuals with the music?

    Well, that's easy: we can use the features of the sound API, in my case SDL2.  SDL uses a callback function to refill the audio buffer whenever it is about to run empty, and the buffer has to be refilled for the music to keep playing.  So whenever the callback function is called, just output the corresponding visual (a minimal sketch follows below).
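
    Here is a minimal sketch of such a callback; the PlaybackState struct and its field names are just placeholders, and the callback gets registered through the callback and userdata fields of the SDL_AudioSpec passed to SDL_OpenAudioDevice:

    /* SDL2 audio callback sketch: feed the next chunk of WAV data to the device
     * and note which precomputed visual should be drawn next. */
    typedef struct {
        Uint8  *data;    /* WAV audio data from SDL_LoadWAV        */
        Uint32  len;     /* total length in bytes                  */
        Uint32  pos;     /* current playback position in bytes     */
        int     frame;   /* index of the visual to draw next       */
    } PlaybackState;

    static void audio_callback(void *userdata, Uint8 *stream, int len)
    {
        PlaybackState *st = (PlaybackState *)userdata;
        Uint32 remaining  = st->len - st->pos;
        Uint32 to_copy    = (Uint32)len < remaining ? (Uint32)len : remaining;

        SDL_memcpy(stream, st->data + st->pos, to_copy);
        if (to_copy < (Uint32)len)              /* end of file: pad with silence */
            SDL_memset(stream + to_copy, 0, len - to_copy);

        st->pos   += to_copy;
        st->frame += 1;   /* the main loop sees this change and draws the matching visual */
    }

    Keep the callback itself light: SDL calls it from its own audio thread, so it is better to just advance a counter here and let the main loop do the actual drawing.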

    And that's it!
    You should now be capable of implementing a music visualizer.

    Happy coding


