Hey everyone! This is lesson 4 in the Java for FTC series.
For this lesson, you should already be able to write an OpMode, know how to navigate the Android Studio interface, and know where the Gradle files are.
What you will learn/what you can do by the end of this lesson:
- Know what OpenCV is
- Use OpenCV in your FTC Repo
- Analyze a camera frame to determine the placement of the duck on the barcode (2022 FTC Season)
OpenCV
OpenCV, or Open Source Computer Vision, is a library written in C++ (with bindings for other languages such as Python) that allows users to analyze and perform actions on images from a camera or other source. For FTC, we will use a version of OpenCV called EasyOpenCV, a Java wrapper made specifically for FTC by OpenFTC, which utilizes a compiled version of the C++ library.

Installing OpenCV into your FTC Repo
First, open your repository/project in Android Studio.
We will follow the instructions here, which are reproduced below for simplicity. Credit goes to OpenFTC.
First, add the following to the file “build.gradle” of the TeamCode module:
    dependencies {
        implementation 'org.openftc:easyopencv:1.5.0'
    }
Next, open “build.common.gradle”, remove the string “arm64-v8a” everywhere it appears in that file, and increase the minSdkVersion from 23 to 24, as follows:
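For reference, here is a rough sketch of how the relevant part of “build.common.gradle” should end up (the exact surrounding contents depend on your SDK version, so treat this as a guide rather than an exact copy):
    defaultConfig {
        ...
        minSdkVersion 24    // was 23

        ndk {
            abiFilters "armeabi-v7a"    // "arm64-v8a" removed from this list
        }
    }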
Once you have done that, click the “Sync Project with Gradle Files” button in the Android Studio toolbar.

Then, you need to transfer a compiled binary of the OpenCV C++ library (found here) to your Robot Controller. Instructions differ between Windows and macOS (on Linux, MTP support is not very stable; some solutions are here):
macOS 10.7 and Newer:
- Download and install Android File Transfer from Google.
- Connect your Android device by USB (Control Hub or Android phone; if using a phone, make sure to enable MTP mode).
- Open Android File Transfer, and move the binary (libOpenCvAndroid453.so) to the FIRST folder on the Control Hub or phone (it should already exist).
- Then quit Android File Transfer and unplug your Control Hub or phone from the computer; you are all set!
Windows 10 (and probably 7 and 8):
- Plug in your Android phone (must be in MTP mode) or Control Hub.
- Windows will ask what you want to do with the device; choose to open it in File Explorer.
- Move libOpenCvAndroid453.so to the FIRST folder (it should already exist).
- Eject the Android device and you are all set!
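Alternatively, if you are comfortable with the command line and have adb (the Android Debug Bridge) installed, you can push the binary instead; a one-liner like the following should work, assuming the standard FIRST folder on the device's internal storage:
    adb push libOpenCvAndroid453.so /sdcard/FIRST/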
Now we can start using OpenCV. On to the next section!
OpenCV Sample OpMode: FTC 2022 Season Example
We will show you how to use a USB webcam, which you must first plug into the phone or Control Hub and add to the robot configuration (hardware map), like your motors and sensors.
Each OpenCV OpMode has to have a specific init-method preamble, as well as a pipeline class which processes the images from the camera.
Here is sample Java code for an OpMode which initializes a camera called “WebcamMain” and prints to telemetry the average Luma (the Y channel of the YCrCb color space) of the image seen by the camera:
package org.firstinspires.ftc.teamcode;

import com.qualcomm.robotcore.eventloop.opmode.OpMode;
import com.qualcomm.robotcore.eventloop.opmode.TeleOp;
import org.firstinspires.ftc.robotcore.external.hardware.camera.WebcamName;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;
import org.openftc.easyopencv.OpenCvCamera;
import org.openftc.easyopencv.OpenCvCameraFactory;
import org.openftc.easyopencv.OpenCvCameraRotation;
import org.openftc.easyopencv.OpenCvPipeline;
import org.openftc.easyopencv.OpenCvWebcam;
import java.util.ArrayList;

@TeleOp // register the OpMode so it appears in the Driver Station list
public class OpenCVExampleOpMode extends OpMode {
    static final int STREAM_WIDTH = 1920; // modify for your camera
    static final int STREAM_HEIGHT = 1080; // modify for your camera

    OpenCvWebcam webcam;
    SamplePipeline pipeline;

    @Override
    public void init() {
        // The camera monitor view shows a live preview on the Robot Controller screen
        int cameraMonitorViewId = hardwareMap.appContext.getResources().getIdentifier("cameraMonitorViewId", "id", hardwareMap.appContext.getPackageName());
        WebcamName webcamName = hardwareMap.get(WebcamName.class, "WebcamMain"); // put your camera's name here
        webcam = OpenCvCameraFactory.getInstance().createWebcam(webcamName, cameraMonitorViewId);
        pipeline = new SamplePipeline();
        webcam.setPipeline(pipeline);
        webcam.openCameraDeviceAsync(new OpenCvCamera.AsyncCameraOpenListener() {
            @Override
            public void onOpened() {
                webcam.startStreaming(STREAM_WIDTH, STREAM_HEIGHT, OpenCvCameraRotation.UPRIGHT);
            }

            @Override
            public void onError(int errorCode) {
                telemetry.addData("Camera Failed", "");
                telemetry.update();
            }
        });
    }

    @Override
    public void loop() {
        telemetry.addData("Image Analysis:", pipeline.getAnalysis());
        telemetry.update();
    }
}
class SamplePipeline extends OpenCvPipeline {
    Mat YCrCb = new Mat();
    Mat Y = new Mat();
    int avg;

    /*
     * This function takes the RGB frame, converts it to YCrCb,
     * and extracts the Y channel to the 'Y' variable
     */
    void inputToY(Mat input) {
        Imgproc.cvtColor(input, YCrCb, Imgproc.COLOR_RGB2YCrCb);
        ArrayList<Mat> yCrCbChannels = new ArrayList<Mat>(3);
        Core.split(YCrCb, yCrCbChannels);
        Y = yCrCbChannels.get(0);
        yCrCbChannels.get(1).release(); // the Cr and Cb channels are unused,
        yCrCbChannels.get(2).release(); // so release them right away
    }

    @Override
    public void init(Mat firstFrame) {
        inputToY(firstFrame);
    }

    @Override
    public Mat processFrame(Mat input) {
        inputToY(input);
        System.out.println("processing requested");
        avg = (int) Core.mean(Y).val[0];
        YCrCb.release(); // don't leak memory!
        Y.release(); // don't leak memory!
        return input;
    }

    public int getAnalysis() {
        return avg;
    }
}
Note that at the end of processFrame() we release the “Y” and “YCrCb” objects, which is necessary to prevent memory leaks. Since this is a Java wrapper for a C++ library, the image data lives in native C++ memory, which must be released explicitly (whereas memory for ordinary Java objects is reclaimed automatically). If you don't do this, the first few frames will work, and then the app will crash. So don't forget it!
Let's take a closer look at what we can do with this API. Our pipeline currently looks at the whole camera view, which doesn't quite help us; we want to look at only a small rectangular region of the image. That means we want to create a subset of the image (a submat) to analyze, and we can also draw that region onto the camera preview. We change the pipeline class as follows:
// In addition to the imports above, this version needs org.opencv.core.Point,
// org.opencv.core.Rect, and org.opencv.core.Scalar.
class SamplePipeline extends OpenCvPipeline {
    Mat YCrCb = new Mat();
    Mat Y = new Mat();
    Mat RectA_Y = new Mat();
    int avg;
    int avgA;

    static final int STREAM_WIDTH = 1920; // modify for your camera
    static final int STREAM_HEIGHT = 1080; // modify for your camera
    static final int WidthRectA = 130;
    static final int HeightRectA = 110;
    static final Point RectATopLeftAnchor = new Point((STREAM_WIDTH - WidthRectA) / 2 + 300, ((STREAM_HEIGHT - HeightRectA) / 2) - 100);
    Point RectATLCorner = new Point(
            RectATopLeftAnchor.x,
            RectATopLeftAnchor.y);
    Point RectABRCorner = new Point(
            RectATopLeftAnchor.x + WidthRectA,
            RectATopLeftAnchor.y + HeightRectA);

    /*
     * This function takes the RGB frame, converts it to YCrCb,
     * and extracts the Y channel to the 'Y' variable
     */
    void inputToY(Mat input) {
        Imgproc.cvtColor(input, YCrCb, Imgproc.COLOR_RGB2YCrCb);
        ArrayList<Mat> yCrCbChannels = new ArrayList<Mat>(3);
        Core.split(YCrCb, yCrCbChannels);
        Y = yCrCbChannels.get(0);
        yCrCbChannels.get(1).release(); // the Cr and Cb channels are unused,
        yCrCbChannels.get(2).release(); // so release them right away
    }

    @Override
    public void init(Mat firstFrame) {
        inputToY(firstFrame);
    }

    @Override
    public Mat processFrame(Mat input) {
        inputToY(input);
        System.out.println("processing requested");
        avg = (int) Core.mean(Y).val[0];
        // Core.split() gives us a brand-new Y buffer every frame, so the
        // submat must be re-created here each frame; a submat made once in
        // init() would keep pointing at the first frame's pixels forever.
        RectA_Y = Y.submat(new Rect(RectATLCorner, RectABRCorner));
        avgA = (int) Core.mean(RectA_Y).val[0];
        RectA_Y.release(); // don't leak memory!
        YCrCb.release(); // don't leak memory!
        Y.release(); // don't leak memory!
        Imgproc.rectangle( // draw the region of interest on the preview
                input, // Buffer to draw on
                RectATLCorner, // First point which defines the rectangle
                RectABRCorner, // Second point which defines the rectangle
                new Scalar(0, 0, 255), // The color the rectangle is drawn in (blue)
                2); // Thickness of the rectangle lines
        return input;
    }

    public int getAnalysis() {
        return avg;
    }

    public int getRectA_Analysis() {
        return avgA;
    }
}
In the loop method of the OpMode, we add the value of getRectA_Analysis() to the telemetry in order to see it.
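For example, the loop method of the OpMode above could become:
    @Override
    public void loop() {
        telemetry.addData("Image Analysis:", pipeline.getAnalysis());
        telemetry.addData("RectA Analysis:", pipeline.getRectA_Analysis());
        telemetry.update();
    }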
What this does is draw a blue rectangle on the camera view, with the dimensions and top-left corner coordinates we specified earlier. We make a submat of our selected region and take its average Luma. If we make two more rectangles (for three total) and arrange them to point at the different squares of the barcode, then we can compare the Luma (basically the intensity in a black/white image) of each region: the rectangle covering the yellow duck will be the brightest, which tells us where the duck is.
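Here is a minimal sketch of that comparison, assuming you add two more rectangles whose averages are stored in hypothetical avgB and avgC fields, computed exactly the way avgA is:
    // avgB and avgC are hypothetical fields, filled from RectB and RectC
    // submats the same way avgA is filled from RectA.
    public int getBarcodePosition() {
        if (avgA >= avgB && avgA >= avgC) return 1; // duck on the first square
        if (avgB >= avgC) return 2;                 // duck on the middle square
        return 3;                                   // duck on the last square
    }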
© The RoboMentors (Marc and Anne-Sarah)