Speech is the most common mode of human communication, and the speech signal also conveys the speaker's emotional state. In this study, bionic wavelet transform entropy is used for speaker-independent and context-independent emotion detection from speech. After preprocessing, the signal is decomposed by the bionic wavelet transform with the Morlet mother wavelet, and the Shannon entropy of each decomposition node is computed as a feature. In addition, prosodic features, namely the first four formants, jitter (pitch deviation amplitude), and shimmer (energy variation amplitude), together with MFCC features, complete the feature vector. A Support Vector Machine (SVM) is used for multi-class classification of the emotion samples. Forty-six different utterances of a single sentence from the Berlin emotional speech dataset are selected for analysis. The emotions considered are sadness, happiness, fear, boredom, anger, and the normal (neutral) emotional state. Experimental results show that the proposed features improve emotion detection accuracy in the multi-class setting.
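As a minimal illustration of the entropy feature described above, the sketch below computes one common form of Shannon entropy over a vector of wavelet decomposition coefficients (the normalized-energy definition, where each coefficient's squared magnitude is treated as a probability mass). This is only an assumption about the exact entropy variant used in the study; the function name and the normalization step are illustrative, and the actual bionic wavelet decomposition that would produce the coefficients is not shown here.

```python
import math

def node_shannon_entropy(coeffs):
    """Normalized-energy Shannon entropy of one decomposition node.

    Assumed definition (one common choice in wavelet-entropy features):
        p_i = c_i**2 / sum_j c_j**2
        H   = -sum_i p_i * log(p_i)
    """
    energies = [c * c for c in coeffs]
    total = sum(energies)
    if total == 0.0:
        return 0.0  # silent node: no information
    probs = [e / total for e in energies if e > 0.0]
    return -sum(p * math.log(p) for p in probs)

# A flat coefficient vector gives maximal entropy log(N);
# a single dominant coefficient gives entropy near zero.
flat = node_shannon_entropy([1.0, 1.0, 1.0, 1.0])
peaked = node_shannon_entropy([5.0, 0.0, 0.0, 0.0])
```

One such entropy value per decomposition node, concatenated with the prosodic and MFCC features, would form the feature vector fed to the SVM classifier.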