Using OpenCV in Xcode to paint the wall in an image

Learn about iOS Camera, OpenCV, and storing images in this Xcode Tutorial.

Tushar Gusain
10 min read · Oct 11, 2020

Versions:

Xcode 11, Swift 5, iOS 13

This is a post by Tushar Gusain, an Android and iOS developer.

A good paint job beautifies your home and adds character and personality. For that reason it's important to take sufficient time planning which colors, shades, and paint styles you want to decorate your home with.

Whenever you buy a new house, whitewash your home, or just want to repaint it, you are stuck deciding which colour will suit the walls best. An app that helps you visualize those walls in different colours will come in handy.

The idea: take a photo of one or more walls with your phone, then tap a wall to paint it in the chosen colour or texture. Painting the wall, i.e. the image processing part, can be done with the help of the OpenCV library.

Keep reading to know how simple it is to integrate OpenCV in your app.

About OpenCV

OpenCV (Open Source Computer Vision Library) is a library of programming functions mainly aimed at real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage then Itseez (which was later acquired by Intel).

It is an open source library with quite a number of useful algorithms and APIs for image processing. It is mainly used for image-related operations, such as:

1. Reading and writing images.

2. Detecting faces and their features.

3. Detecting shapes in an image.

4. Text recognition.

5. Modifying image quality and colors.

6. Developing augmented reality apps.

…and more.

OpenCV supports a wide variety of programming languages such as C++, Python, Java, etc., and is available on different platforms including Windows, Linux, OS X, Android, and iOS. Interfaces for high-speed GPU operations based on CUDA and OpenCL are also under active development.

(To know more about OpenCV, follow this link: https://opencv.org/about/)

Creating your Xcode project

Open Xcode, click on New Project, then select Single View App and click Next.

Now, enter the name of your app in the Product Name field, then select Swift as the Language and Storyboard as the User Interface.

After that, click Next.

Integrating the OpenCV module

Step 1: Downloading OpenCV

Download the OpenCV library from here: https://opencv.org/releases/
For this project I am using OpenCV 3.4.7. After downloading, unzip the folder.

Step 2: Importing OpenCV

Once downloaded, let's import it into our Paint App Storyboard app target.

Drag and drop opencv2.framework into the project.

The options we choose in the add-files dialog will copy opencv2.framework into our project and link the framework to our app.

You should then find opencv2.framework under Frameworks, Libraries, and Embedded Content in the General tab of the Paint App Storyboard target configuration.

Building the User Interface

Step 1: Add a Toolbar.

Go ahead and add a Toolbar to the bottom of your ViewController.

After that, add bar button items for taking a photo, opening the gallery, applying a texture, applying a colour, and choosing a colour (one for each action method we will hook up below).

Step 2: Add an ImageView.

Now, add an ImageView to your ViewController.

Step 3: Hook up your User Interface.

Change the name of your ViewController to PaintViewController.

a) Hookup your Outlets (ImageView).

Add the below IBOutlet to the PaintViewController and hook it with the imageView.

//MARK:- ViewController Outlets
@IBOutlet var imageView: UIImageView!

b) Add Property Variables

Add the below local variables to the PaintViewController.

//MARK:- ViewController state variables
private var touchX = 0.0
private var touchY = 0.0
private var screenSize = CGSize.zero
private var cameraToggled = false
private var applyPaint = true
private var currentColor = UIColor.red
private var imagePickerVC: UIImagePickerController {
    let vc = UIImagePickerController()
    vc.delegate = self
    vc.allowsEditing = true
    return vc
}

c) Hookup your Action Methods (Toolbar Buttons).

Add the below lines of code to the PaintViewController and hook them up to their respective bar buttons.

//MARK:- ViewController Action methods
@IBAction func takePhoto(_ sender: UIBarButtonItem) {
    cameraToggled = true
    getPhoto()
}

@IBAction func openGallery(_ sender: UIBarButtonItem) {
    cameraToggled = false
    getPhoto()
}

@IBAction func toggleTexture(_ sender: UIBarButtonItem) {
    applyPaint = false
}

@IBAction func toggleColor(_ sender: UIBarButtonItem) {
    applyPaint = true
}

@IBAction func chooseColor(_ sender: UIBarButtonItem) {
    colorPicker.isHidden = false
}

Insert Wall Painter Algorithm

Step 1: Add C++ header and implementation files

From the menu, click File > New > File…, then search for and select the C++ File template.

Click Next, name it WallPainter, and check Also create header file.

Finally, select your app target folder (Paint App Storyboard in my case), then click Next and Create. Xcode will then prompt you with some options to configure the app to use multiple languages. Click the Create Bridging Header option.

The bridging header file is important: it will allow us to consume our Wall Painter algorithm by letting the different languages talk to each other.

Open WallPainter.hpp, copy and paste the code below:

#ifndef WallPainter_hpp
#define WallPainter_hpp

#include <opencv2/opencv.hpp>

class WallPainter {
public:
    /* Returns image with paint overlay */
    cv::Mat paint_wall(cv::Mat image, cv::Point p, cv::Size imageSize, cv::Scalar chosenColor);
    /* Returns image with texture overlay */
    cv::Mat apply_texture(cv::Mat image, cv::Mat texture, cv::Point p, cv::Size imageSize);
};

#endif /* WallPainter_hpp */

Step 2: Add the paint and texture implementations.

Open WallPainter.cpp and add the below methods:

a) Paint Wall Method

Painting the wall consists of the following steps:

  1. Get the greyscale and HSV matrices for the given image.
  2. Get the S-channel matrix from the HSV image.
  3. Blur both image matrices.
  4. Run Canny edge detection on both of them.
  5. Merge (linear/alpha blend) the two edge-detected images.
  6. Dilate the resulting image matrix.
  7. Flood-fill your original RGB image with the chosen color, using the image from step 6 as the mask and the scaled touch coordinates (seedPoint) as the seed point.
  8. Take the HSV matrix of the flood-filled image and merge the V channel of the original image into it.
  9. Finally, blend the resulting image with your original image.
  10. Show the image to the user.

Copy and paste the below code:

#include "WallPainter.hpp"

using namespace cv;
using namespace std;

Mat WallPainter::paint_wall(Mat image, Point p, cv::Size imageSize, cv::Scalar chosenColor) {
    double cannyMinThres = 30.0;
    double ratio = 2.5;

    Mat mRgbMat = image;
    cvtColor(mRgbMat, mRgbMat, COLOR_RGBA2RGB);
    Mat mask = Mat(Size(mRgbMat.cols/8.0, mRgbMat.rows/8.0), CV_8UC1, Scalar(0.0));
    Mat img;
    mRgbMat.copyTo(img);

    // greyscale
    Mat mGreyScaleMat;
    cvtColor(mRgbMat, mGreyScaleMat, COLOR_RGB2GRAY, 3);
    medianBlur(mGreyScaleMat, mGreyScaleMat, 3);
    Mat cannyGreyMat;
    Canny(mGreyScaleMat, cannyGreyMat, cannyMinThres, cannyMinThres*ratio, 3);

    // hsv
    Mat hsvImage;
    cvtColor(img, hsvImage, COLOR_RGB2HSV);
    // got the hsv values
    vector<Mat> list = vector<Mat>(3);
    split(hsvImage, list);
    Mat sChannelMat;
    vector<Mat> sList = vector<Mat>{list[1]};
    merge(sList, sChannelMat);
    medianBlur(sChannelMat, sChannelMat, 3);

    // canny
    Mat cannyMat;
    Canny(sChannelMat, cannyMat, cannyMinThres, cannyMinThres*ratio, 3);
    addWeighted(cannyMat, 0.5, cannyGreyMat, 0.5, 0.0, cannyMat);
    dilate(cannyMat, cannyMat, mask, Point(0.0, 0.0), 5);

    cout << mRgbMat.cols << "," << mRgbMat.rows << "\n";
    double width = imageSize.width;
    double height = imageSize.height;
    Point seedPoint = Point(p.x*(double(mRgbMat.cols)/width), p.y*(double(mRgbMat.rows)/height));
    cout << seedPoint.x << "," << seedPoint.y << "\n";

    // Make sure to resize the cannyMat or it'll throw an error
    resize(cannyMat, cannyMat, Size(cannyMat.cols + 2.0, cannyMat.rows + 2.0));
    medianBlur(mRgbMat, mRgbMat, 15);
    int floodFillFlag = 8;
    floodFill(mRgbMat, cannyMat, seedPoint, chosenColor, 0, Scalar(5.0, 5.0, 5.0), Scalar(5.0, 5.0, 5.0), floodFillFlag);
    dilate(mRgbMat, mRgbMat, mask, Point(0.0, 0.0), 5);

    // got the hsv of the flood-filled image
    Mat rgbHsvImage;
    cvtColor(mRgbMat, rgbHsvImage, COLOR_RGB2HSV);
    vector<Mat> list1 = vector<Mat>(3);
    split(rgbHsvImage, list1);

    // merged the "v" of the original image with the flood-filled mat
    Mat result;
    vector<Mat> newList = vector<Mat>();
    newList.push_back(list1[0]);
    newList.push_back(list1[1]);
    newList.push_back(list[2]);
    merge(newList, result);

    // converted to rgb
    cvtColor(result, result, COLOR_HSV2RGB);
    addWeighted(result, 0.7, img, 0.3, 0.0, result);
    return result;
}

b) Apply Texture Method

Applying a texture is similar to applying paint to your image:

  1. Follow steps 1 to 6 (given above) for your image.
  2. For the flood-fill part:

a) Wall image: darken the wall area by flood-filling with black, using the edge-detected image as the mask.

b) Wall mask: brighten the wall area by flood-filling a blank matrix with white, using a copy of the edge-detected image as the mask, then AND it with the texture image so the texture survives only on the wall.

  3. Merge the two image matrices using the bitwise_or operator.
  4. Finally, add back the V channel of the original image and show the result to the user.

Copy and paste the below code:

Mat WallPainter::apply_texture(Mat image, Mat texture, Point p, cv::Size imageSize) {
    double cannyMinThres = 30.0;
    double ratio = 2.5;

    Mat mRgbMat = image;
    cvtColor(mRgbMat, mRgbMat, COLOR_RGBA2RGB);
    Mat mask = Mat(Size(mRgbMat.cols/8.0, mRgbMat.rows/8.0), CV_8UC1, Scalar(0.0));
    Mat img;
    mRgbMat.copyTo(img);

    // greyscale
    Mat mGreyScaleMat;
    cvtColor(mRgbMat, mGreyScaleMat, COLOR_RGB2GRAY, 3);
    medianBlur(mGreyScaleMat, mGreyScaleMat, 3);
    Mat cannyGreyMat;
    Canny(mGreyScaleMat, cannyGreyMat, cannyMinThres, cannyMinThres*ratio, 3);

    // hsv
    Mat hsvImage;
    cvtColor(img, hsvImage, COLOR_RGB2HSV);
    // got the hsv values
    vector<Mat> list = vector<Mat>(3);
    split(hsvImage, list);
    Mat sChannelMat;
    vector<Mat> sList = vector<Mat>{list[1]};
    merge(sList, sChannelMat);
    medianBlur(sChannelMat, sChannelMat, 3);

    // canny
    Mat cannyMat;
    Canny(sChannelMat, cannyMat, cannyMinThres, cannyMinThres*ratio, 3);
    addWeighted(cannyMat, 0.5, cannyGreyMat, 0.5, 0.0, cannyMat);
    dilate(cannyMat, cannyMat, mask, Point(0.0, 0.0), 5);

    double width = imageSize.width;
    double height = imageSize.height;
    Point seedPoint = Point(p.x*(double(mRgbMat.cols)/width), p.y*(double(mRgbMat.rows)/height));

    // Make sure to resize the cannyMat or it'll throw an error
    resize(cannyMat, cannyMat, Size(cannyMat.cols + 2.0, cannyMat.rows + 2.0));
    Mat cannyMat1;
    cannyMat.copyTo(cannyMat1);

    Mat wallMask = Mat(mRgbMat.size(), mRgbMat.type());
    int floodFillFlag = 8;
    // brighten the wall area in the blank mask image
    floodFill(wallMask, cannyMat, seedPoint, Scalar(255.0, 255.0, 255.0), 0, Scalar(5.0, 5.0, 5.0), Scalar(5.0, 5.0, 5.0), floodFillFlag);
    // darken the wall area in the original image
    floodFill(mRgbMat, cannyMat1, seedPoint, Scalar(0.0, 0.0, 0.0), 0, Scalar(5.0, 5.0, 5.0), Scalar(5.0, 5.0, 5.0), floodFillFlag);

    Mat textureImgMat;
    cvtColor(texture, textureImgMat, COLOR_RGBA2RGB);
    resize(textureImgMat, textureImgMat, Size(mRgbMat.cols, mRgbMat.rows));
    bitwise_and(wallMask, textureImgMat, textureImgMat);

    Mat resultImage;
    bitwise_or(textureImgMat, mRgbMat, resultImage);

    // got the hsv of the composited image
    Mat rgbHsvImage;
    cvtColor(resultImage, rgbHsvImage, COLOR_RGB2HSV);
    vector<Mat> list1 = vector<Mat>(3);
    split(rgbHsvImage, list1);

    // merged the "v" of the original image with the composited mat
    Mat result;
    vector<Mat> newList = vector<Mat>();
    newList.push_back(list1[0]);
    newList.push_back(list1[1]);
    newList.push_back(list[2]);
    merge(newList, result);

    // converted to rgb
    cvtColor(result, result, COLOR_HSV2RGB);
    addWeighted(result, 0.8, img, 0.2, 0.0, result);
    return result;
}

Step 3: Consume Wall Painter algorithm using Swift

Now, we need to consume the Wall Painter Algorithm code.

Swift cannot consume C++ code directly. However, Objective-C++ can, and Swift can consume Objective-C. So let's create Objective-C code to bridge between Swift and C++.

Add a new header file to the project. Select File > New > File… and then select Header File from the iOS templates. Name it WallPainterBridge (select your app target as the folder). Copy and paste the code below into WallPainterBridge.h:

#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface WallPainterBridge : NSObject

- (UIImage *) paintWallOf: (UIImage *) image touchPointX: (double) pointX touchPointY: (double) pointY imageWidth: (double) width imageHeight: (double) height colorToapply: (UIColor *) color;

- (UIImage *) applyTextureTo: (UIImage *) image textureToApply: (UIImage *) texture touchPointX: (double) pointX touchPointY: (double) pointY imageWidth: (double) width imageHeight: (double) height;

@end

Next, create an Objective-C file that will implement the WallPainterBridge interface. Select File > New > File… and then select Objective-C File from the iOS templates. Name it WallPainterBridge (select your app target as the folder).

Edit the file name of the recently created WallPainterBridge.m and add an extra "m", renaming it to WallPainterBridge.mm.

The extra "m" tells Xcode that this is an Objective-C++ file, so WallPainterBridge is now allowed to use C++ from within.

Copy and paste the code below to WallPainterBridge.mm:

#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>
#import <Foundation/Foundation.h>
#import "WallPainterBridge.h"
#include "WallPainter.hpp"

@implementation WallPainterBridge

- (UIImage *) paintWallOf:(UIImage *)image touchPointX:(double)pointX touchPointY:(double)pointY imageWidth:(double)width imageHeight:(double)height colorToapply:(UIColor *)color {
    // convert UIImage to cv::Mat
    cv::Mat opencvImage;
    UIImageToMat(image, opencvImage, true);

    // run the wall painter
    WallPainter wallPainter;
    cv::Point p = cv::Point(pointX, pointY);
    cv::Size imageSize = cv::Size(width, height);
    CGFloat red = 0;
    CGFloat green = 0;
    CGFloat blue = 0;
    CGFloat alpha = 0;
    [color getRed:&red green:&green blue:&blue alpha:&alpha];
    cv::Scalar chosenColor = cv::Scalar(red, green, blue, alpha);
    cv::Mat imageWithWallPainted = wallPainter.paint_wall(opencvImage, p, imageSize, chosenColor);

    // convert cv::Mat back to UIImage and return it to the caller
    return MatToUIImage(imageWithWallPainted);
}

- (UIImage *) applyTextureTo:(UIImage *)image textureToApply:(UIImage *)texture touchPointX:(double)pointX touchPointY:(double)pointY imageWidth:(double)width imageHeight:(double)height {
    // convert UIImage to cv::Mat
    cv::Mat opencvImage;
    UIImageToMat(image, opencvImage, true);
    cv::Mat opencvTexture;
    UIImageToMat(texture, opencvTexture, true);

    // run the wall painter
    WallPainter wallPainter;
    cv::Point p = cv::Point(pointX, pointY);
    cv::Size imageSize = cv::Size(width, height);
    cv::Mat imageWithWallTextured = wallPainter.apply_texture(opencvImage, opencvTexture, p, imageSize);

    // convert cv::Mat back to UIImage and return it to the caller
    return MatToUIImage(imageWithWallTextured);
}

@end

Open (Your App name)-Bridging-Header.h and add the following line:

#import "WallPainterBridge.h"

Coding your ViewController

Step 1: Add a ColorPicker

Now, we are going to add ChromaColorPicker to our project.
(Here is the GitHub repo for ChromaColorPicker: https://github.com/joncardasis/ChromaColorPicker)

Create a new folder inside Your app target folder and name it Chroma Color Picker.

Inside the folder, add the Swift source files from the ChromaColorPicker repository.

Now, go inside your PaintViewController and add a new colorPicker variable:

var colorPicker: ChromaColorPicker!

Then add the chroma color picker delegate method:

//MARK:- ChromaColorPicker delegate methods
extension PaintViewController: ChromaColorPickerDelegate {
    func colorPickerDidChooseColor(_ colorPicker: ChromaColorPicker, color: UIColor) {
        currentColor = color
        colorPicker.hexLabel.textColor = currentColor
        colorPicker.isHidden = true
    }
}

Step 2: Add Image Picker Delegate

Copy and paste the below lines of code:

//MARK:- ImagePicker delegate methods
extension PaintViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        picker.dismiss(animated: true, completion: nil)
        guard let image = info[.editedImage] as? UIImage else {
            print("no image found")
            return
        }
        print(image)
        imageView.image = image
    }
}

Step 3: Add the getPhoto method

Copy and paste the below lines of code inside PaintViewController:

//MARK:- Custom Methods
private func getPhoto() {
    // imagePickerVC is a computed property that returns a new picker each
    // time, so grab one instance, configure it, and present that same one
    let picker = imagePickerVC
    if cameraToggled {
        picker.sourceType = .camera
    } else {
        picker.sourceType = .photoLibrary
    }
    present(picker, animated: true)
}

Step 4: Code the viewDidLoad method

Copy and paste the below lines of code inside the viewDidLoad method:

//MARK:- Lifecycle Hooks
override func viewDidLoad() {
    super.viewDidLoad()
    print("success")
    colorPicker = ChromaColorPicker(frame: CGRect(x: view.frame.width/2 - 140, y: view.frame.height/2 - 270, width: 300, height: 300))
    colorPicker.delegate = self // ChromaColorPickerDelegate
    colorPicker.padding = 5
    colorPicker.stroke = 3
    colorPicker.currentColor = currentColor
    colorPicker.hexLabel.textColor = currentColor
    colorPicker.isHidden = true
    view.addSubview(colorPicker)
    getPhoto()
}

Final result

Now, after getting the image, tap on a wall inside it and the wall should be painted red (red is the default color; of course you can choose other colours and your texture from the Toolbar menu).

Goodbye

Congratulations! You painted the wall inside an image using the OpenCV library.

And with that my friend, you are done. Great job!

You can find the whole code for this app here: https://github.com/tushar40/Wall-Paint-App-iOS/tree/master/Paint%20app%20StoryboardUI

References:

https://medium.com/onfido-tech/building-a-simple-lane-detection-ios-app-using-opencv-4f70d8a6e6bc

https://medium.com/@tushargusain40/using-opencv-in-android-studio-to-paint-the-wall-in-an-image-abac3a79c790
