Face detection with TensorFlow, React, and TypeScript
Hello guys,
I have developed an application with face detection that automatically applies a mask to your face in real time.
In this article, I will explain how to develop this application.
DEMO: https://mask-app-one.vercel.app/
GitHub: https://github.com/YuikoIto/mask-app
- If the mask does not show up even after a few seconds, please reload the page.
Set up a React application and install react-webcam
```shell
$ npx create-react-app face-mask-app --template typescript
$ yarn add react-webcam @types/react-webcam
```
Then, try setting up the web camera.
```tsx
// App.tsx
import { useRef } from "react";
import "./App.css";
import Webcam from "react-webcam";

const App = () => {
  const webcam = useRef<Webcam>(null);
  return (
    <div className="App">
      <header className="header">
        <div className="title">face mask App</div>
      </header>
      <Webcam
        audio={false}
        ref={webcam}
        style={{
          position: "absolute",
          margin: "auto",
          textAlign: "center",
          top: 100,
          left: 0,
          right: 0,
        }}
      />
    </div>
  );
};

export default App;
```
Run `yarn start` and access http://localhost:3000/.
Yay! The web camera is now available.
Try face detection using TensorFlow
Here, we use this model: https://github.com/tensorflow/tfjs-models/tree/master/face-landmarks-detection
```shell
$ yarn add @tensorflow-models/face-landmarks-detection @tensorflow/tfjs-core @tensorflow/tfjs-converter @tensorflow/tfjs-backend-webgl
```
- If you don't use TypeScript, you don't have to install all of them. Install `@tensorflow/tfjs` instead of `@tensorflow/tfjs-core`, `@tensorflow/tfjs-converter`, and `@tensorflow/tfjs-backend-webgl`.
```tsx
// App.tsx
import { useRef } from "react";
import Webcam from "react-webcam";
import "@tensorflow/tfjs-core";
import "@tensorflow/tfjs-converter";
import "@tensorflow/tfjs-backend-webgl";
import * as faceLandmarksDetection from "@tensorflow-models/face-landmarks-detection";
import { MediaPipeFaceMesh } from "@tensorflow-models/face-landmarks-detection/dist/types";

const App = () => {
  const webcam = useRef<Webcam>(null);

  const useFaceDetect = async () => {
    const model = await faceLandmarksDetection.load(
      faceLandmarksDetection.SupportedPackages.mediapipeFacemesh
    );
    detect(model);
  };

  const detect = async (model: MediaPipeFaceMesh) => {
    if (!webcam.current) return;
    const webcamCurrent = webcam.current as any;
    // If the video stream is not ready yet, just return.
    if (webcamCurrent.video.readyState !== 4) {
      return;
    }
    const video = webcamCurrent.video;
    const predictions = await model.estimateFaces({
      input: video,
    });
    if (predictions.length) {
      console.log(predictions);
    }
  };

  useFaceDetect();

  // ...the JSX from the previous snippet stays the same.
};
```
Check the logs.
OK, seems good.
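For reference, each logged prediction is a large object; the parts we use later are `scaledMesh` (468 `[x, y, z]` landmarks in pixel coordinates) and `boundingBox`. Here is a minimal sketch with an invented sample prediction — the `PredictionLike` type, the `landmark` helper, and the values are illustrative, not the library's actual types:

```typescript
// Minimal sketch of the parts of a prediction we use later.
// The field names follow the mediapipe-facemesh output; the sample
// values and the PredictionLike type are invented for illustration.
type Coord3D = [number, number, number];

interface PredictionLike {
  scaledMesh: Coord3D[]; // 468 face landmarks in pixel coordinates
  boundingBox: { topLeft: [number, number]; bottomRight: [number, number] };
}

// Return the [x, y] pixel position of one landmark, e.g. No. 195 (the nose).
const landmark = (p: PredictionLike, index: number): [number, number] => {
  const [x, y] = p.scaledMesh[index];
  return [x, y];
};

// Tiny fake prediction, for illustration only.
const fake: PredictionLike = {
  scaledMesh: Array.from({ length: 468 }, (_, i) => [i, i * 2, 0] as Coord3D),
  boundingBox: { topLeft: [0, 0], bottomRight: [467, 934] },
};

console.log(landmark(fake, 195)); // [195, 390]
```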
Set up a canvas to overlay the mask on your face
Add `<canvas>` under `<Webcam>`.
```tsx
// App.tsx
const App = () => {
  const webcam = useRef<Webcam>(null);
  const canvas = useRef<HTMLCanvasElement>(null);
  return (
    <div className="App">
      <header className="header">
        <div className="title">face mask App</div>
      </header>
      <Webcam audio={false} ref={webcam} />
      <canvas ref={canvas} />
    </div>
  );
};
```
Match the size of the canvas to the video.

```tsx
const videoWidth = webcamCurrent.video.videoWidth;
const videoHeight = webcamCurrent.video.videoHeight;
canvas.current.width = videoWidth;
canvas.current.height = videoHeight;
```
Then, let's look at the face mesh keypoint map and check which area we should fill in.
According to the map, point No. 195 is around the nose, so we set this point as the fulcrum. We can draw the mask easily using `beginPath()` and `closePath()`.
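The drawing rule can be sketched as pure arithmetic before touching the canvas: jawline points in the first half of the list are pushed outward to the left, the rest outward to the right, and every point also shifts down a bit so the mask extends slightly past the face. A minimal sketch of that offset rule (the `offsetJawPoint` helper name is mine, not from the article's code):

```typescript
// Offset rule used when tracing the jawline: points in the first half
// of the list shift left (-distance), the rest shift right (+distance);
// every point also shifts down (+distance). Helper name is illustrative.
type Point = [number, number];

const offsetJawPoint = (
  point: Point,
  index: number,
  total: number,
  distance: number
): Point => {
  const dx = index < total / 2 ? -distance : distance;
  return [point[0] + dx, point[1] + distance];
};

console.log(offsetJawPoint([100, 200], 0, 19, 5));  // [95, 205]  (left half)
console.log(offsetJawPoint([100, 200], 18, 19, 5)); // [105, 205] (right half)
```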
```tsx
// mask.ts
import { AnnotatedPrediction } from "@tensorflow-models/face-landmarks-detection/dist/mediapipe-facemesh";
import {
  Coord2D,
  Coords3D,
} from "@tensorflow-models/face-landmarks-detection/dist/mediapipe-facemesh/util";

const drawMask = (
  ctx: CanvasRenderingContext2D,
  keypoints: Coords3D,
  distance: number
) => {
  // Keypoint indices along the jawline, traced from ear to ear.
  const points = [
    93, 132, 58, 172, 136, 150, 149, 176, 148, 152, 377, 400, 378, 379, 365,
    397, 288, 361, 323,
  ];
  ctx.moveTo(keypoints[195][0], keypoints[195][1]);
  for (let i = 0; i < points.length; i++) {
    if (i < points.length / 2) {
      ctx.lineTo(
        keypoints[points[i]][0] - distance,
        keypoints[points[i]][1] + distance
      );
    } else {
      ctx.lineTo(
        keypoints[points[i]][0] + distance,
        keypoints[points[i]][1] + distance
      );
    }
  }
};

export const draw = (
  predictions: AnnotatedPrediction[],
  ctx: CanvasRenderingContext2D,
  width: number,
  height: number
) => {
  if (predictions.length > 0) {
    predictions.forEach((prediction: AnnotatedPrediction) => {
      const keypoints = prediction.scaledMesh;
      const boundingBox = prediction.boundingBox;
      const bottomRight = boundingBox.bottomRight as Coord2D;
      const topLeft = boundingBox.topLeft as Coord2D;
      // Make the drawn mask a bit larger than the face.
      const distance =
        Math.sqrt(
          Math.pow(bottomRight[0] - topLeft[0], 2) +
            Math.pow(bottomRight[1] - topLeft[1], 2)
        ) * 0.02;
      ctx.clearRect(0, 0, width, height);
      ctx.fillStyle = "black";
      ctx.save();
      ctx.beginPath();
      drawMask(ctx, keypoints as Coords3D, distance);
      ctx.closePath();
      ctx.fill();
      ctx.restore();
    });
  }
};
```
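The `* 0.02` factor makes the mask padding proportional to the face size: about 2% of the bounding box diagonal. As a standalone sketch of that calculation (the `maskPadding` helper name is mine, not part of the library):

```typescript
// Padding for the mask outline, proportional to the detected face:
// 2% of the bounding box diagonal. Helper name is illustrative.
const maskPadding = (
  topLeft: [number, number],
  bottomRight: [number, number]
): number => {
  const dx = bottomRight[0] - topLeft[0];
  const dy = bottomRight[1] - topLeft[1];
  return Math.sqrt(dx * dx + dy * dy) * 0.02;
};

// A 300x400 face box has a 500px diagonal, so the padding is about 10px.
console.log(maskPadding([100, 100], [400, 500]));
```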
Import this `draw` function in App.tsx and use it.
```tsx
const ctx = canvas.current.getContext("2d") as CanvasRenderingContext2D;
requestAnimationFrame(() => {
  draw(predictions, ctx, videoWidth, videoHeight);
});
```
Finish!
Thanks for reading.
This was my first time using TensorFlow, but thanks to the good README in the official GitHub repository, I was able to build a small application easily. I will develop more applications using TensorFlow.
Please send me a message if you have any questions.
References
Original Link: https://dev.to/yuiko/face-detection-by-using-tensorflow-react-typescript-3dn5