Making Laundry Less Terrible with Machine Learning

Since my son was born, I've been doing a lot of laundry. An infant's laundry needs are small (well, the clothes are) but frequent, so to be efficient you might as well do the whole family's laundry. When you do enough laundry, you'll notice those little tags. Each garment has a set of shapes - triangles, circles, squares, and others - that describe how to care for it.

Laundry care tags - Wash at 60 degrees Celsius, Do not bleach

Well, last year I got obsessed and started learning how to interpret each one. It's not complicated, but it's not obvious either. So I set out to make it easier on myself and others. It would be easy enough to create a pocket guide for laundry tags, but I thought it would be neat to take a picture of a care tag and be shown which symbol it matched. This also seemed like an opportunity to play around with machine learning and computer vision tools.

Each time I did laundry, I took photos of the tags. I then organized them by care-tag type (Wash at 30 degrees Celsius vs. Non-Chlorine Bleach). Using Firebase's AutoML Vision Edge, I could then train a machine learning model.
Vision Edge generates a TFLite model to use to classify an image. To use the model, I created an Android app and an iOS app to consume results from the camera.
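The Kotlin snippet below loads the model via a manifest.json bundled in the app's assets. For reference, that file is generated by Firebase when you download the model rather than written by hand, and it typically looks something like this:

{
    "modelFile": "model.tflite",
    "labelsFile": "dict.txt",
    "modelType": "IMAGE_LABELING"
}

Here, dict.txt holds the names of the labels the model can emit.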

Here's what the Kotlin code to process an image looks like:

override fun onImageSaved(file: File) {
    val uri = file.toUri()
    if (uri != null) {
        // Load the on-device machine learning model bundled in assets
        val localModel = FirebaseAutoMLLocalModel.Builder()
            .setAssetFilePath("manifest.json")
            .build()
        val options = FirebaseVisionOnDeviceAutoMLImageLabelerOptions.Builder(localModel)
            .setConfidenceThreshold(0.5f) // Sets the appropriate confidence threshold
            .build()
        // Load the labeler
        val labeler = FirebaseVision.getInstance().getOnDeviceAutoMLImageLabeler(options)
        val image: FirebaseVisionImage
        try {
            // Load the image as a Firebase Vision image
            image = FirebaseVisionImage.fromFilePath(applicationContext, uri)
            labeler.processImage(image)
                .addOnSuccessListener { labels ->
                    for (label in labels) {
                        val text = label.text
                        val confidence = label.confidence
                        Log.d("RECOGNITION", "$text $confidence")
                    }
                    // do a thing with the confidence and label
                }
                .addOnFailureListener { e ->
                    println(e.message)
                }
        } catch (e: IOException) {
            e.printStackTrace()
        }
    }
}
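The success listener above just logs each label; in the app, the label and its confidence drive what's shown on screen. As a minimal sketch of that step (the careInstructions map and its label strings are hypothetical; they would have to match the label names in the model's dict.txt), picking the best match could look like:

// Hypothetical helper, not from the original app: map the model's best label
// to a human-readable care instruction.
private val careInstructions = mapOf(
    "wash_30" to "Machine wash at 30 degrees Celsius",
    "wash_60" to "Machine wash at 60 degrees Celsius",
    "do_not_bleach" to "Do not bleach",
    "non_chlorine_bleach" to "Use only non-chlorine bleach"
)

fun describe(labels: List<FirebaseVisionImageLabel>): String {
    // Take the label the model is most confident about
    val best = labels.maxByOrNull { it.confidence } ?: return "No care tag recognized"
    return careInstructions[best.text] ?: "Unrecognized tag: ${best.text}"
}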

The code in Swift is very similar:

// Load the on-device AutoML model and configure the labeler
let localModel = AutoMLLocalModel(manifestPath: manifestPath)
let options = VisionOnDeviceAutoMLImageLabelerOptions(localModel: localModel)
options.confidenceThreshold = 0.05
let labeler = Vision.vision().onDeviceAutoMLImageLabeler(options: options)

// Configure the image picker for a square crop with no photo filters
var config = YPImagePickerConfiguration()
config.showsCrop = .rectangle(ratio: 1.0)
config.showsPhotoFilters = false
let picker = YPImagePicker(configuration: config)
picker.didFinishPicking { [unowned picker] items, _ in
    if let photo = items.singlePhoto {
        self.image = photo.image
        let image = VisionImage(image: photo.image)
        self._didFinishPicking!(self)
        labeler.process(image) { labels, error in
            // labels is optional; bail out if labeling failed
            guard error == nil, let labels = labels else { return }
            for label in labels {
                self.result = GuideResult(label: label.text, confidence: label.confidence ?? 0)
                self._setResult!(self)
                break
            }
        }
    }
    picker.dismiss(animated: true, completion: nil)
    self.dismiss(animated: true, completion: nil)
}
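(YPImagePicker is a third-party image-picker library for iOS; the square crop presumably keeps the framing tight around the tag before the photo is handed to the labeler.)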

This project also leveraged SwiftUI, so I ended up building Apple Watch and iPad versions as well.

I'm really proud of how it came out. I released both apps under the LaundrySnap name in their respective app stores back in May, but I just launched it on Product Hunt today!

Product Hunt Link
Apple App Store
Google Play Store

Screenshot of iOS app


Original Link: https://dev.to/securingsincity/making-laundry-less-terrible-with-machine-learning-3ih6
