Blending Example: OpenCV Alpha Blending

We are studying and comparing all the available blending codes in OpenCV. This is the third post in the OpenCV tutorial series, which compiles image blending codes from across the web.

So far we have covered examples of Multiband blending & Feather Blending.

Here we will cover alpha blending.

DISCLAIMER: I do not claim any authority over these codes. I have just compiled them in one place and edited them to run on Linux + OpenCV 2.4.9. All authors are credited here, and if anyone has complaints about the use of these codes, they can comment here to bring it to my notice.

For the impatient ones, let's jump directly to the code:

#include <cv.h> 
#include <highgui.h> 
#include <iostream> 
#include <opencv2/opencv.hpp> 

using namespace cv; 
using namespace std; 

	cv::Mat border(cv::Mat mask) 
	{ 
			cv::Mat gx; 
			cv::Mat gy; 

			// gradients of the mask are non-zero only at the mask border 
			cv::Sobel(mask, gx, CV_32F, 1, 0, 3); 
			cv::Sobel(mask, gy, CV_32F, 0, 1, 3); 

			cv::Mat border; 
			cv::magnitude(gx, gy, border); 

			return border > 100; 
	} 

	cv::Mat computeAlphaBlending(cv::Mat image1, cv::Mat mask1, cv::Mat image2, cv::Mat mask2) 
	{ 
			// edited: find regions where no mask is set 
			// compute the region where no mask is set at all, to use those color values unblended 
			cv::Mat bothMasks = mask1 | mask2; 
			cv::Mat noMask = 255-bothMasks; 
			// ------------------------------------------ 

			// create an image with equal alpha values: 
			cv::Mat rawAlpha = cv::Mat(noMask.rows, noMask.cols, CV_32FC1); 
			rawAlpha = 1.0f; 

			// invert the border, so that border values are 0 ... this is needed for the distance transform 
			cv::Mat border1 = 255-border(mask1); 
			cv::Mat border2 = 255-border(mask2); 

			// show the intermediate results for debugging and verification; should be an image where the border of the face is black, rest is white 
			//cv::imshow("b1", border1); 
			//cv::imshow("b2", border2); 

			// compute the distance to the object center 
			cv::Mat dist1; 
			cv::distanceTransform(border1,dist1,CV_DIST_L2, 3); 

			// scale distances to values between 0 and 1 
			double min, max; cv::Point minLoc, maxLoc; 

			// find min/max vals 
			cv::minMaxLoc(dist1,&min,&max, &minLoc, &maxLoc, mask1&(dist1>0));  // edited: find min values > 0 
			dist1 = dist1 * 1.0/max; // values between 0 and 1 since min val should always be 0 

			// same for the 2nd image 
			cv::Mat dist2; 
			cv::distanceTransform(border2,dist2,CV_DIST_L2, 3); 
			cv::minMaxLoc(dist2,&min,&max, &minLoc, &maxLoc, mask2&(dist2>0));  // edited: find min values > 0 
			dist2 = dist2*1.0/max;  // values between 0 and 1 

			//TODO: now, the exact border has value 0 too... to fix that, enter very small values wherever border pixel is set... 

			// mask the distance values to reduce information to masked regions 
			cv::Mat dist1Masked; 
			rawAlpha.copyTo(dist1Masked, noMask);   // edited: where no mask is set, blend with equal values 
			dist1.copyTo(dist1Masked, mask1);       // use the distance weights inside mask 1 
			rawAlpha.copyTo(dist1Masked, mask1 & (255-mask2)); // edited: full weight where only mask 1 is set 

			cv::Mat dist2Masked; 
			rawAlpha.copyTo(dist2Masked, noMask);   // edited: where no mask is set, blend with equal values 
			dist2.copyTo(dist2Masked, mask2);       // use the distance weights inside mask 2 
			rawAlpha.copyTo(dist2Masked, mask2 & (255-mask1)); // edited: full weight where only mask 2 is set 

			//cv::imshow("d1", dist1Masked); 
			//cv::imshow("d2", dist2Masked); 

			// dist1Masked and dist2Masked now hold the "quality" of each pixel: the higher the value, the more of that pixel's information should be kept after blending 
			// problem: these quality weights don't build a linear combination yet 
			// you want a linear combination of both images' pixel values, so at the end you have to divide by the sum of both weights 
			cv::Mat blendMaskSum = dist1Masked+dist2Masked; 

			// you have to convert the images to float to multiply with the weight 
			cv::Mat im1Float; 
			image1.convertTo(im1Float, dist1Masked.type()); 
			//cv::imshow("im1Float", im1Float/255.0); 

			// TODO: you could replace those splitting and merging if you just duplicate the channel of dist1Masked and dist2Masked 
			// the splitting is just used here to use .mul later... which needs same number of channels 
			std::vector<cv::Mat> channels1; 
			cv::split(im1Float, channels1); 
			// multiply pixel value with the quality weights for image 1 
			cv::Mat im1AlphaB = dist1Masked.mul(channels1[0]); 
			cv::Mat im1AlphaG = dist1Masked.mul(channels1[1]); 
			cv::Mat im1AlphaR = dist1Masked.mul(channels1[2]); 

			std::vector<cv::Mat> alpha1; 
			alpha1.push_back(im1AlphaB); 
			alpha1.push_back(im1AlphaG); 
			alpha1.push_back(im1AlphaR); 
			cv::Mat im1Alpha; 
			cv::merge(alpha1, im1Alpha); 
			//cv::imshow("alpha1", im1Alpha/255.0); 

			cv::Mat im2Float; 
			image2.convertTo(im2Float, dist2Masked.type()); 

			std::vector<cv::Mat> channels2; 
			cv::split(im2Float, channels2); 
			// multiply pixel value with the quality weights for image 2 
			cv::Mat im2AlphaB = dist2Masked.mul(channels2[0]); 
			cv::Mat im2AlphaG = dist2Masked.mul(channels2[1]); 
			cv::Mat im2AlphaR = dist2Masked.mul(channels2[2]); 

			std::vector<cv::Mat> alpha2; 
			alpha2.push_back(im2AlphaB); 
			alpha2.push_back(im2AlphaG); 
			alpha2.push_back(im2AlphaR); 
			cv::Mat im2Alpha; 
			cv::merge(alpha2, im2Alpha); 
			//cv::imshow("alpha2", im2Alpha/255.0); 

			// now sum both weighted images and divide by the sum of the weights (linear combination) 
			cv::Mat imBlendedB = (im1AlphaB + im2AlphaB)/blendMaskSum; 
			cv::Mat imBlendedG = (im1AlphaG + im2AlphaG)/blendMaskSum; 
			cv::Mat imBlendedR = (im1AlphaR + im2AlphaR)/blendMaskSum; 
			std::vector<cv::Mat> channelsBlended; 
			channelsBlended.push_back(imBlendedB); 
			channelsBlended.push_back(imBlendedG); 
			channelsBlended.push_back(imBlendedR); 

			// merge back to 3 channel image 
			cv::Mat merged; 
			cv::merge(channelsBlended, merged); 

			// convert to 8UC3 
			cv::Mat merged8U; 
			merged.convertTo(merged8U, CV_8UC3); 

			return merged8U; 
	} 

	int main(int argc, char* argv[]) 
	{ 

			cv::Mat i1 = cv::imread(argv[1]);					// read first image from file 
			cv::Mat i2 = cv::imread(argv[2]);					// read second image from file 

			cv::Mat m1 = cv::imread(argv[1],CV_LOAD_IMAGE_GRAYSCALE);		// re-read 1st image as grayscale, to build its mask 
			cv::Mat m2 = cv::imread(argv[2],CV_LOAD_IMAGE_GRAYSCALE);		// re-read 2nd image as grayscale, to build its mask 

			    // works too, for background near white 
			    //  m1 = m1 < 220; 
			    //  m2 = m2 < 220; 

			// edited: using OTSU thresholding to create the masks. If it does not work for your images, you have to create your own masks with a better technique 
			cv::threshold(m1, m1, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU); 
			cv::threshold(m2, m2, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU); 

			cv::Mat out = computeAlphaBlending(i1, m1, i2, m2);		// blend the two images using their masks 
			imshow("result", out);						// show result 
			cv::waitKey(-1);						// wait for a key press 
			return 0; 
	} 

Multiband blending and feather blending are included in the OpenCV implementation, but this one is not. Also, unlike the other two, there is a restriction here: both images must have the same dimensions, or OpenCV will throw exceptions.
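As a minimal sketch of that restriction (plain C++; `ImageSize` and `canBlend` are illustrative names of mine, not OpenCV API), the check you would want to run before calling the function is just:

```cpp
#include <cassert>

// Stand-in for the cv::Mat dimension fields; illustrative only.
struct ImageSize {
    int rows;
    int cols;
};

// Both images (and both masks) must match exactly, otherwise the
// element-wise operations used in computeAlphaBlending() (|, &, mul, +)
// will throw an exception inside OpenCV.
bool canBlend(ImageSize a, ImageSize b)
{
    return a.rows == b.rows && a.cols == b.cols;
}
```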

computeAlphaBlending() takes four arguments: two images and two threshold masks.
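Per pixel, the blend it computes boils down to a weighted average, where each image's weight comes from its distance transform. A minimal plain-C++ sketch of that core (no OpenCV needed; `blendPixel` is my own illustrative name):

```cpp
#include <cassert>
#include <cmath>

// w1 and w2 are the distance-transform "quality" weights
// (dist1Masked / dist2Masked in the listing above); dividing by their
// sum is the blendMaskSum normalisation step.
float blendPixel(float p1, float w1, float p2, float w2)
{
    return (w1 * p1 + w2 * p2) / (w1 + w2);
}
```

For example, `blendPixel(100.0f, 0.9f, 200.0f, 0.1f)` gives 110: deep inside the first image, the result stays close to that image's pixel value.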

It gives pretty good results, but it is extremely slow. If you need to apply it once, or to a single image, it is the perfect choice; but if you need something to apply to each frame of a video, consider multiband blending.

Please go through the comments for a detailed explanation.

Source Images


Next part of the series here.




Comments

  1. Actually, the addWeighted function from OpenCV does alpha blending.

  2. Yes, what we are doing here is the same, but instead of assigning the pixel weights arbitrarily, we calculate them "intelligently", taking into account the borders of the image and the distance from the border. From what I have learned, using simple addWeighted is called linear blending. I will shortly post the results to compare with this one; if you have results, you can update as well.
    Thanks.
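To make that distinction concrete, here is a plain-C++ sketch (illustrative names of mine, no OpenCV) contrasting a fixed addWeighted-style alpha with the distance-based weights used in the post:

```cpp
#include <cassert>
#include <cmath>

// Fixed 50/50 mix: the same alpha at every overlap pixel (what a plain
// cv::addWeighted call with alpha = beta = 0.5 would produce).
float fixedBlend(float p1, float p2)
{
    return 0.5f * p1 + 0.5f * p2;
}

// Distance-weighted mix: d1 and d2 are each pixel's distance to its own
// image's border (the role cv::distanceTransform plays in the post), so
// the image whose interior the pixel sits deeper inside dominates.
float distanceBlend(float p1, float d1, float p2, float d2)
{
    return (d1 * p1 + d2 * p2) / (d1 + d2);
}
```

At an overlap pixel two steps from the first image's border but only one step from the second's, `distanceBlend(10, 2, 90, 1)` leans toward the first image (110/3, about 36.7), while `fixedBlend(10, 90)` is always 50 regardless of position.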
