Flower Recognition and Accuracy in Convolutional Neural Networks


A Convolutional Neural Network (CNN) is a class of artificial neural network used in deep learning. CNNs are most commonly applied to analyze visual imagery. Most convolutional neural networks are only equivariant, as opposed to invariant, to translation. They have applications in:

  • Image and video recognition
  • Recommender systems
  • Image classification
  • Image segmentation
  • Medical image analysis
  • Natural language processing
  • Brain-computer interfaces
  • Financial time series

CNNs are regularized versions of multilayer perceptrons. Multilayer perceptrons are fully connected networks: every neuron in one layer is connected to all neurons in the next layer. That full connectivity makes them prone to overfitting data.


In this post, we will try to introduce convolutional neural networks that can efficiently identify a flower simply by being fed an image of the flower to be recognized. Flowers are among the loveliest creations of nature. They exist in millions of different species and colors. Identifying each of them requires a botanist with vast knowledge and skill.

In this era of evolving and emerging technologies, the impossible is made possible by integrating artificial intelligence into real-world problems. Bringing in machine learning algorithms such as convolutional neural networks to classify flower species from a single image would be an enormous support for industries such as pharmaceuticals and cosmetics.


  • CNN models are trained primarily by feeding in a set of flower images along with their labels.
  • These images are then passed through a stack of layers, including convolutional, ReLU, pooling, and fully connected layers.
  • These images are processed in batches.
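
The layer stack above can be sketched as a minimal NumPy forward pass. The 4×4 "image" and the 3×3 kernel below are toy values made up for illustration, not the model's actual weights:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no-padding) 2-D convolution of a single-channel image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # ReLU layer: clamp negative activations to zero.
    return np.maximum(x, 0)

def max_pool(x, size=2):
    # Max-pooling layer: keep the largest value in each size x size block.
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 4x4 "image" and a 3x3 vertical-edge kernel (illustrative values only).
image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[-1., 0., 1.], [-1., 0., 1.], [-1., 0., 1.]])

features = max_pool(relu(conv2d(image, kernel)))  # conv -> ReLU -> pool
```

In a real CNN this conv → ReLU → pool pattern repeats several times before the result is flattened and fed to fully connected layers.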

The proposed system included the following:

  • A batch size of 32 was used.
  • The model was trained for 150 epochs.
  • Initially, the model extracts small features.
  • As training progresses, more detailed features are extracted.
  • Most of the preprocessing is performed automatically; that is one of the main benefits of a CNN.
  • Furthermore, the input images were resized.
  • Augmentation is also applied, which increases the size of the dataset through operations such as rotation, shear, etc.
  • The model identifies features and patterns and learns them during the training process.
  • This information is later used to find the name of a flower when a new flower image is given as input.
  • Categorical cross-entropy is used as the loss function.
  • At first, the loss values are very high; as training progresses, the loss is reduced by adjusting the weight values.
  • Once classification is done, a CSV file is imported and the most important uses of that plant are displayed.
  • The model was deployed as a web application to increase user-friendliness.


The proposed system was implemented as follows:

Stage 1: Image Acquisition:

  • This stage involves gathering the images that will be used to train the model.
  • The model can then classify a flower based on the knowledge learned during its training period.

Stage 2: Image Pre-processing:

  • In this stage, the images gathered in the previous stage were resized and augmented to increase the efficiency of the model.
  • The size of the dataset is increased during augmentation by applying operations such as rotation, shear, etc.
  • The images are then divided into a 75 percent training set and a 25 percent testing set.
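
A minimal NumPy sketch of this stage, assuming the images are already loaded as arrays. The array shapes, the 24-class labels, and the flip-based augmentation are placeholders (the post mentions rotation and shear, which libraries such as Keras's ImageDataGenerator provide):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in dataset: 100 tiny "images" with integer class labels (toy data).
images = rng.random((100, 8, 8, 3))
labels = rng.integers(0, 24, size=100)

# Simple augmentation: add horizontally flipped copies, doubling the dataset.
aug_images = np.concatenate([images, images[:, :, ::-1, :]])
aug_labels = np.concatenate([labels, labels])

# Shuffle, then split 75% / 25% into training and testing sets.
idx = rng.permutation(len(aug_images))
split = int(0.75 * len(idx))
train_x, test_x = aug_images[idx[:split]], aug_images[idx[split:]]
train_y, test_y = aug_labels[idx[:split]], aug_labels[idx[split:]]
```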

Stage 3: Training Phase:

  • In this stage the actual training of the model takes place.
  • During this phase the model extracts features, for instance the color and shape of the flowers used for training.
  • Each of the training images is passed through a stack of layers.
  • Those layers include the convolutional layer, ReLU layer, pooling layer, and fully connected layer.
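
The post does not give the exact architecture, so the following is only a plausible Keras sketch of such a layer stack. The filter counts, the 128×128 input size, and the two conv/pool blocks are assumptions; the 24-class softmax output matches the subset of 24 flower classes described later:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical layer stack: conv -> pool -> conv -> pool -> dense layers.
model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(24, activation="softmax"),  # one output per flower class
])

# Loss and optimizer as named by the post: categorical cross-entropy and SGD.
model.compile(optimizer="sgd",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```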

Stage 4: Validation Phase:

  • Once the model finishes training on the training set, it attempts to improve by tuning its weight values.
  • The loss function used is categorical cross-entropy.
  • The optimizer used is stochastic gradient descent.
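
As a rough illustration of these two ingredients, here is categorical cross-entropy and a single stochastic-gradient-descent step on a toy softmax classifier. All of the numbers are made up; a real model has millions of weights updated the same way:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def categorical_cross_entropy(y_true, y_pred):
    # y_true is one-hot; the small epsilon avoids log(0).
    return -np.sum(y_true * np.log(y_pred + 1e-12))

# Toy 3-class example: logits come from weights w applied to input x.
x = np.array([1.0, 2.0])
w = np.zeros((3, 2))             # 3 classes, 2 input features
y_true = np.array([0., 1., 0.])  # the correct class is index 1

y_pred = softmax(w @ x)
loss_before = categorical_cross_entropy(y_true, y_pred)

# One SGD step: for softmax + cross-entropy, the gradient w.r.t. w
# is (y_pred - y_true) x^T.
lr = 0.1
w -= lr * np.outer(y_pred - y_true, x)

loss_after = categorical_cross_entropy(y_true, softmax(w @ x))
# loss_after is smaller than loss_before: adjusting the weights
# reduced the loss, exactly as described above.
```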

Stage 5: Output Calculation:

  • Once the validation phase is over, the model is ready to take an unknown image of a flower.
  • It predicts the flower's name from the knowledge gained during the training and validation phases.
  • Once classification is completed, the model shows the common name along with the family name of the flower.
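
Conceptually, the final prediction step reduces to taking the most probable class and looking up its name. The class names and the softmax probabilities below are placeholders, not real model output:

```python
import numpy as np

# Placeholder class names and a placeholder softmax output for one image.
class_names = ["daisy", "hibiscus", "rose", "sunflower"]
probabilities = np.array([0.05, 0.80, 0.10, 0.05])

# The predicted class is the one with the highest probability.
predicted_index = int(np.argmax(probabilities))
predicted_name = class_names[predicted_index]
confidence = probabilities[predicted_index]
```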

Stage 6: Benefits Module:

  • A previously created CSV file is imported once the identity of the flower is established.
  • The benefits of the corresponding flower are then looked up and shown to the user.
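
A small sketch of that lookup, assuming the CSV has `flower` and `uses` columns. The column names and the file contents here are invented for illustration; the post does not describe the CSV's layout:

```python
import csv
import io

# Stand-in for the previously created CSV file (columns are assumed).
csv_text = """flower,uses
hibiscus,Used in herbal teas and hair-care products
rose,Used in perfumes and rose water
"""

def lookup_benefits(predicted_flower, csv_file):
    # Scan the CSV for the predicted flower and return its recorded uses.
    for row in csv.DictReader(csv_file):
        if row["flower"] == predicted_flower:
            return row["uses"]
    return "No uses recorded for this flower."

benefits = lookup_benefits("hibiscus", io.StringIO(csv_text))
```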

Stage 7: Web Application:

  • Finally, the trained model was deployed as a web application.
  • That application makes the system more user-friendly.
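
The post does not describe the web stack, so this is just a minimal Flask sketch of how such a deployment might look. The `/predict` route and the `classify` stub with its canned answer are invented; a real version would run the uploaded image through the trained CNN:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def classify(image_bytes):
    # Stub standing in for the trained CNN; always answers "hibiscus" here.
    return {"common_name": "hibiscus", "family": "Malvaceae"}

@app.route("/predict", methods=["POST"])
def predict():
    # A real app would read the uploaded image from request.files.
    image_bytes = request.get_data()
    return jsonify(classify(image_bytes))

if __name__ == "__main__":
    app.run(debug=True)
```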


  • The dataset used for training the CNN model is a subset of Oxford 102 Flowers.
  • The original dataset consists of 102 classes with 40 to 200 images of each flower.
  • A subset of 24 flower classes, with 150 images per class, is used to train the model.
  • To ensure unbiased training, an equal number of images of each class is provided.
  • The model was trained with a batch size of 32 for 150 epochs.
  • The classification report obtained after the training and validation period is shown in figure-2.
  • The graph plots the training loss, validation loss, training accuracy, and validation accuracy for each epoch.



  • The model achieved an overall accuracy of 90%.
  • A correct prediction with 98.46% confidence was achieved when the model was given a real-time image of a hibiscus taken with a mobile camera.

Mansoor Ahmed

Mansoor Ahmed is a Chemical Engineer, web developer, and tech writer currently living in Pakistan. My interests range from technology to web development. I am also interested in programming, writing, and reading.