How to recognize clothing items with vision AI (2024 guide for a JavaScript / React app)

May 21, 2024

Building smart AI features has been all the rage lately. The launch of tools such as GPT, Stable Diffusion, and Llama has ushered in a new wave of products that leverage AI.

In fashion, one of the big uses of AI has been to power smart features with images. We've seen everything from Poshmark enabling visual search for similar items in their store, to apps that teach people how to describe their clothes in fashion lingo, to virtual try-on tech that lets customers be their own models.

However, while there have probably never been more AI tools available than today, getting started with building can still be a daunting task.

Today, we'll take a look at how you can quickly add vision AI functionality to your JavaScript app using Dragoneye's Fashion API. We'll build out a prototype app in React that lets folks upload an image of clothing and displays the clothing type along with its details.

By the end, we'll be able to get results like this:

We'll walk through how to create the React app, upload the image and get results from the Dragoneye API, and how to display the output nicely.

If you want to follow along with the complete code, we have all of the code in our examples repo here!

Prerequisite - Installing Node.js and npm

If you haven't already, you'll want to get started by installing Node.js and npm. The most up-to-date guide for this is from npm here.

They recommend using a Node version manager to manage the installations on your system, which I recommend too. It'll help keep your development environment cleaner on an ongoing basis.
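If you go that route, the exact commands depend on the version manager you choose. As a rough sketch, assuming you use nvm (one popular option), installing and activating the latest LTS release looks like this:

> nvm install --lts
> nvm use --lts
> node --version
> npm --version

The last two commands just confirm that node and npm are available on your PATH.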

Create the base React app

We'll start out by creating a React app using the create-react-app package. In your command line, change to your directory of choice and run the following:

> npx create-react-app dragoneye-clothing-react-app

This will create a folder called dragoneye-clothing-react-app with a basic template app.

Now let's start up the React app to see what the basic template looks like:

> cd dragoneye-clothing-react-app
> npm run start

Compiled successfully!

You can now view dragoneye-clothing-react-app in the browser.

  Local:            http://localhost:3000

Note that the development build is not optimized.
To create a production build, use npm run build.

This should automatically open http://localhost:3000 in your default browser. If not, open it in a browser.

You should see the default template like so:

Perfect, now we can get started adding the functionality we need!

Add an image picker

Let's add a button that allows folks to select the image they want to run predictions on. First, let's update the UI to get rid of the existing default content and add a button. In our App.js file:

// App.js

import "./App.css";

function App() {
  return (
    <div className="App">
      <h1>Dragoneye App</h1>
      <button className="pickerButton">Select an image</button>
    </div>
  );
}

export default App;

Let's also update the style in App.css:

/* App.css */

body {
  background-color: darkslategray;
  color: white;
}

p {
  margin: 0;
}

.App {
  text-align: center;
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  gap: 24px;
}

.pickerButton {
  background-color: white;
  color: darkslategray;
  border: none;
  padding: 10px 20px;
  border-radius: 20px;
  cursor: pointer;
}

In the browser, you should now see this:

Nice! Now, let's hook up the logic so that clicking the button actually does something! Thankfully, this is fairly easy with existing file picker packages such as use-file-picker.
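As an aside, a file picker like this is essentially an <input type="file"> element plus a FileReader that turns the selected file into a data URL. If you're curious what that looks like without a package, here's a rough hand-rolled sketch (ManualImagePicker and onImagePicked are hypothetical names, not something we'll use below):

// A minimal hand-rolled alternative (sketch only) - we'll use use-file-picker instead.
function ManualImagePicker({ onImagePicked }) {
  const handleChange = (event) => {
    const file = event.target.files[0];
    if (!file) return;

    const reader = new FileReader();
    // Once the file is read, hand back a data URL that can be used as an <img> src
    reader.onload = () =>
      onImagePicked({ name: file.name, content: reader.result });
    reader.readAsDataURL(file);
  };

  return <input type="file" accept="image/*" onChange={handleChange} />;
}

The package wraps this kind of logic behind a nicer hook-based API, so we'll use it instead.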

Let's install the package:

> npm install use-file-picker

Now let's use it in code so that our app shows the image we pick!

In App.js:

// App.js

import { useFilePicker } from "use-file-picker";
import "./App.css";
import { useState } from "react";

function App() {
  const [imageFileContent, setImageFileContent] = useState(null);

  const { openFilePicker } = useFilePicker({
    readAs: "DataURL",
    accept: "image/*",
    multiple: false,
    onFilesSuccessfullySelected: async ({ plainFiles, filesContent }) => {
      setImageFileContent(filesContent[0]);
    },
  });

  return (
    <div className="App">
      <h1>Dragoneye App</h1>
      <button onClick={openFilePicker} className="pickerButton">
        Select an image
      </button>
      {imageFileContent ? (
        <img
          key={imageFileContent.name}
          src={imageFileContent.content}
          alt={imageFileContent.name}
          style={{ maxWidth: "100%", maxHeight: 480 }}
        />
      ) : null}
    </div>
  );
}

export default App;

Now, we can use the button to select an image and have it displayed in our app.

Hooking it up to the Dragoneye API

The last thing we need to do is call the Dragoneye API to fetch clothing tag predictions using the latest computer vision AI model.

To do this, we can use the dragoneye-node package. This lightweight package takes care of calling the Dragoneye API and returning the result.

Let's install it:

> npm install dragoneye-node

To use the package, we first need to create the client once when the app starts. For this, we can use the useMemo React hook so that the client isn't recreated on every render.

// App.js 

import { Dragoneye } from "dragoneye-node";
import { useFilePicker } from "use-file-picker";
import "./App.css";
import { useMemo, useState } from "react";

function App() {
  const dragoneyeClient = useMemo(
    () =>
      new Dragoneye({
        apiKey: "YOUR_API_KEY",
      }),
    []
  );

  // continued...

Let's also update the onFilesSuccessfullySelected callback to call the Dragoneye classification API and save the returned results.

// App.js

// continued...

  const { openFilePicker } = useFilePicker({
    readAs: "DataURL",
    accept: "image/*",
    multiple: false,
    onFilesSuccessfullySelected: async ({ plainFiles, filesContent }) => {
      setImageFileContent(filesContent[0]);
      const results = await dragoneyeClient.classification.predict({
        image: {
          blob: plainFiles[0],
        },
        modelName: "dragoneye/fashion",
      });
      setImagePredictions(results);
    },
  });


// continued...

Note that we specify the image as a blob, and we also specify the modelName for the Dragoneye fashion model dragoneye/fashion.

Once we have the results, the easiest thing to do is just to display them as a JSON object!
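For a quick sanity check, you could render the raw response directly inside the App component's returned <div>, for example with JSON.stringify (a throwaway debugging sketch, not part of the final code):

{/* Quick-and-dirty debug output: dump the raw API response as formatted text */}
{imagePredictions ? (
  <pre style={{ textAlign: "left" }}>
    {JSON.stringify(imagePredictions, null, 2)}
  </pre>
) : null}

The full App.js below goes one step further and maps each prediction to a PredictionResult component. Don't worry that the import doesn't resolve yet - we'll write components/prediction.js in the next section.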

import { Dragoneye } from "dragoneye-node";
import { useFilePicker } from "use-file-picker";
import "./App.css";
import { useMemo, useState } from "react";
import { PredictionResult } from "./components/prediction";

function App() {
  const dragoneyeClient = useMemo(
    () =>
      new Dragoneye({
        apiKey: "YOUR_API_KEY",
      }),
    []
  );

  const [imageFileContent, setImageFileContent] = useState(null);
  const [imagePredictions, setImagePredictions] = useState(null);

  const { openFilePicker } = useFilePicker({
    readAs: "DataURL",
    accept: "image/*",
    multiple: false,
    onFilesSuccessfullySelected: async ({ plainFiles, filesContent }) => {
      setImageFileContent(filesContent[0]);
      const results = await dragoneyeClient.classification.predict({
        image: {
          blob: plainFiles[0],
        },
        modelName: "dragoneye/fashion",
      });
      setImagePredictions(results);
    },
  });

  return (
    <div className="App">
      <h1>Dragoneye App</h1>
      <button onClick={openFilePicker} className="pickerButton">
        Select an image
      </button>
      {imageFileContent ? (
        <img
          key={imageFileContent.name}
          src={imageFileContent.content}
          alt={imageFileContent.name}
          style={{ maxWidth: "100%", maxHeight: 480 }}
        />
      ) : null}
      {imagePredictions
        ? imagePredictions["predictions"].map((prediction, index) => (
            <PredictionResult key={index} prediction={prediction} />
          ))
        : null}
    </div>
  );
}

export default App;

Now in the browser you should see the prediction results rendered for each detected item - fantastic!

Let's unpack that JSON result

Time to take a closer look at what we get back from the API and write a few components to display each part of the response.

{
  "predictions": [
    {
      "normalizedBbox": ...,
      "category": ...,
      "traits": ...
    },
    {
      "normalizedBbox": ...,
      "category": ...,
      "traits": ...
    },
    ...
  ]
}

The predictions field in the JSON response contains a list of predictions, one for each item of clothing that Dragoneye detects in the image.

Now what's in each of the predictions? There are three fields: normalizedBbox, category, and traits.

normalizedBbox

The normalizedBbox field contains the location of the item in the image in the form of a bounding box (think a rectangle).  

The normalizedBbox is an array of 4 floats - [x_min, y_min, x_max, y_max]. These are the minimum and maximum coordinates along the x and y axes, scaled to between 0 and 1 relative to the image dimensions.

See the following image for a visual representation of a sample normalized bounding box.
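To make the format concrete, here's a small sketch (a hypothetical helper, not part of the Dragoneye client) that converts a normalized bounding box back into pixel coordinates for a given image size:

// Convert a normalized bbox [x_min, y_min, x_max, y_max] (values in 0-1)
// into pixel coordinates for an image of the given width and height.
function toPixelBbox(normalizedBbox, imageWidth, imageHeight) {
  const [xMin, yMin, xMax, yMax] = normalizedBbox;
  return {
    left: Math.round(xMin * imageWidth),
    top: Math.round(yMin * imageHeight),
    width: Math.round((xMax - xMin) * imageWidth),
    height: Math.round((yMax - yMin) * imageHeight),
  };
}

// toPixelBbox([0.1, 0.2, 0.6, 0.9], 1000, 800)
// => { left: 100, top: 160, width: 500, height: 560 }

This comes in handy if you ever want to draw the box over the uploaded image.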

Let's create a component in a new file, components/prediction.js (that's where the App.js import above expects it to live), that will parse this out and display it.

// components/prediction.js

function NormalizedBbox({ normalizedBbox }) {
  return (
    <div>
      Normalized bbox: X_min: {normalizedBbox[0]}, X_max: {normalizedBbox[2]},
      Y_min: {normalizedBbox[1]}, Y_max: {normalizedBbox[3]}
    </div>
  );
}

category

The category field contains the predicted category for the item of clothing. These are predictions against the Dragoneye taxonomy, which you can see here.

These TaxonPredictions have the following structure:

{
  "id": ...,               // The ID in the Dragoneye taxonomy
  "type": ...,             // Either "category" or "trait"
  "name": ...,
  "displayName": ...,      // The name that is formatted for display
  "score": ...,            // Prediction score. May be null if the prediction
                           // is for one of the child taxons
  // TaxonPredictions for child taxons. For example, the "jacket"
  // TaxonPrediction might have a child "bomber_jacket" TaxonPrediction
  "children": [
    ...
  ]
}

Here is an example:

"category": {
  "id": 959289656,
  "type": "category",
  "name": "knit_top",
  "displayName": "Sweater Group",
  "score": null,
  "children": [
    {
      "id": 1626616160,
      "type": "category",
      "name": "sweater",
      "displayName": "Sweater",
      "score": 0.9980767965316772,
      "children": []
    }
  ]
}
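Note that in this example the top-level "Sweater Group" taxon has no score and the specific prediction ("Sweater") lives in children. If you just want a single readable label, a small hypothetical helper like categoryPath (not part of dragoneye-node) can walk the tree and join the names:

// Join the category path into a readable breadcrumb by following the
// first child at each level. For the example above: "Sweater Group > Sweater"
function categoryPath(category) {
  const names = [];
  let current = category;
  while (current) {
    names.push(current.displayName);
    current =
      current.children && current.children.length > 0
        ? current.children[0]
        : null;
  }
  return names.join(" > ");
}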

Let's add some components to parse out the category prediction and display it.

// components/prediction.js

function TaxonPrediction({ prediction }) {
  const { id, score, displayName, children } = prediction;
  return (
    <div>
      {[
        `ID: ${id}`,
        `Name: ${displayName}`,
        score ? `Score: ${score.toFixed(2)}` : null,
      ].join("\n")}
      {children.length > 0 ? (
        <>
          <p>children</p>
          {children.map((child) => (
            <TaxonPrediction key={child.id} prediction={child} />
          ))}
        </>
      ) : null}
    </div>
  );
}

function CategoryPrediction({ prediction }) {
  return <TaxonPrediction prediction={prediction} />;
}

traits

The last field in a prediction is traits. This contains predictions for different trait types that the item of clothing has. For example, these might be the sleeve style, neckline, or pant length.

The field is a list of predictions, one for each trait type, with the following format:

"traits": [
  {
    "id": ...,               // The ID in the Dragoneye taxonomy
    "name": ...,             
    "displayName": ...,      // The name that is formatted for display
    "taxons": [              // A list of TaxonPredictions - see above
      ...,
    ]
  },
]

Here is an example:

"traits": [
  {
    "id": 1737788599,
    "name": "sleeve_style",
    "displayName": "Sleeve Style",
    "taxons": [
      {
        "id": 161086903,
        "type": "trait",
        "name": "raglan_sleeve",
        "displayName": "Raglan Sleeve",
        "score": 0.9592581391334534,
        "children": []
      }
    ]
  },
  {
    "id": 3516172983,
    "name": "top_fit",
    "displayName": "Top Fit",
    "taxons": [
      {
        "id": 3516172983,
        "type": "trait",
        "name": "top_fit",
        "displayName": "Loose",
        "score": null,
        "children": [
          {
            "id": 1306038455,
            "type": "trait",
            "name": "oversized",
            "displayName": "Oversized",
            "score": 0.9026983380317688,
            "children": []
          }
        ]
      }
    ]
  },
  {
    "id": 1930479351,
    "name": "neckline",
    "displayName": "Neckline",
    "taxons": [
      {
        "id": 482266030,
        "type": "trait",
        "name": "crew_neck",
        "displayName": "Crew Neck",
        "score": 0.8209400177001953,
        "children": []
      }
    ]
  },
  ...
]
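If you ever want a flat, human-readable summary of these (say, for tags or alt text), here's a hypothetical helper, traitSummaries, that reduces each trait type to a single "label: value" string by walking down to the most specific taxon:

// Flatten a traits array into strings like "Sleeve Style: Raglan Sleeve",
// following the first child at each level to reach the most specific taxon.
function traitSummaries(traits) {
  return traits.map((trait) => {
    let taxon = trait.taxons[0];
    while (taxon && taxon.children.length > 0) {
      taxon = taxon.children[0];
    }
    return `${trait.displayName}: ${taxon ? taxon.displayName : "unknown"}`;
  });
}

// For the example above:
// ["Sleeve Style: Raglan Sleeve", "Top Fit: Oversized", "Neckline: Crew Neck", ...]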

Let's add a component to parse out the trait predictions and display them.

// components/prediction.js

function TraitTypePrediction({ prediction }) {
  const { id, displayName, taxons } = prediction;

  return (
    <div>
      {[`ID: ${id}`, `Name: ${displayName}`].join("\n")}
      {taxons.length > 0 ? (
        <>
          <p>taxon predictions</p>
          {taxons.map((taxon) => (
            <TaxonPrediction key={taxon.id} prediction={taxon} />
          ))}
        </>
      ) : null}
    </div>
  );
}

Lastly, let's add a component called PredictionResult that takes the entire prediction object for one item and displays all of the fields together.

// components/prediction.js

export function PredictionResult({ prediction }) {
  return (
    <div
      style={{
        display: "grid",
        gridTemplateColumns: "auto 1fr",
        gap: 16,
      }}
    >
      <p className="label">Normalized Bbox:</p>
      <NormalizedBbox normalizedBbox={prediction.normalizedBbox} />
      <p className="label">Category: </p>
      <CategoryPrediction prediction={prediction.category} />
      <p className="label">Traits:</p>
      <div
        style={{
          display: "grid",
          gridTemplateColumns: "1fr",
          gap: 16,
        }}
      >
        {prediction.traits.map((trait) => (
          <TraitTypePrediction key={trait.id} prediction={trait} />
        ))}
      </div>
    </div>
  );
}

Viewing this in the browser, we get the following - pretty cool!

Dress it up

The last thing we'll do is spruce up our UI for the prediction results. We'll move through this part more quickly, but essentially we want to format the normalizedBbox, category, and traits fields in the response so that they're more readable.

We'll first write some building block components in components/display.js and style them in components/display.css.

// components/display.js

import "./display.css";

export function Label({ label, value }) {
  return (
    <div style={{ display: "contents" }}>
      <p style={{ fontWeight: "bold" }}>{`${label}:`}</p>
      <p>{value}</p>
    </div>
  );
}

export function LabelGroup({ children }) {
  return (
    <div className="labelGroupWrapper">
      <div className="labelGroup">{children}</div>
    </div>
  );
}

export function DataBlock({ children }) {
  return <div className="dataBlock">{children}</div>;
}

/* components/display.css */

.labelGroupWrapper {
  background-color: white;
  padding-inline: 24px;
  padding-block: 12px;
  border-radius: 8px;
  font-size: 16px;
  display: flex;
  flex-direction: row;
}

.labelGroup {
  display: grid;
  grid-template-columns: auto 1fr;
  gap: 8px;
  column-gap: 8px;
  row-gap: 4px;
  justify-items: start;
  align-items: center;
  margin: auto;
}

.dataBlock {
  background-color: #ddd;
  padding: 8px;
  border-radius: 12px;
  display: flex;
  flex-direction: row;
  gap: 16px;
}

Next, we'll create custom components for each of the fields in prediction.js and style them in prediction.css.

// components/prediction.js

import "./prediction.css";
import { LabelGroup, Label, DataBlock } from "./display";

function AttributePrediction({ prediction }) {
  const { id, score, displayName, children } = prediction;
  return (
    <div
      style={{
        display: "flex",
        flexDirection: "row",
        alignItems: "stretch",
        gap: 16,
      }}
    >
      <LabelGroup>
        <Label label="ID" value={id} />
        <Label label="Name" value={displayName} />
        {score ? <Label label="Score" value={score.toFixed(2)} /> : null}
      </LabelGroup>
      {children.length > 0 ? (
        <>
          <p style={{ alignSelf: "center" }}></p>
          <div style={{ display: "grid", gridTemplateColumns: "1fr", gap: 16 }}>
            {children.map((child) => (
              <AttributePrediction key={child.id} prediction={child} />
            ))}
          </div>
        </>
      ) : null}
    </div>
  );
}

function CategoryPrediction({ prediction }) {
  return (
    <DataBlock>
      <AttributePrediction prediction={prediction} />
    </DataBlock>
  );
}

function TraitTypePrediction({ prediction }) {
  const { id, displayName, taxons } = prediction;

  return (
    <DataBlock>
      <LabelGroup>
        <Label label="ID" value={id} />
        <Label label="Name" value={displayName} />
      </LabelGroup>
      {taxons.length > 0 ? (
        <>
          <p style={{ alignSelf: "center" }}></p>
          <div
            style={{
              display: "grid",
              gridTemplateColumns: "1fr",
              gap: 16,
            }}
          >
            {taxons.map((taxon) => (
              <AttributePrediction key={taxon.id} prediction={taxon} />
            ))}
          </div>
        </>
      ) : null}
    </DataBlock>
  );
}

function NormalizedBbox({ normalizedBbox }) {
  return (
    <DataBlock>
      <LabelGroup>
        <Label
          label="X"
          value={`${normalizedBbox[0].toFixed(2)} ↔ ${normalizedBbox[2].toFixed(
            2
          )}`}
        />
        <Label
          label="Y"
          value={`${normalizedBbox[1].toFixed(2)} ↔ ${normalizedBbox[3].toFixed(
            2
          )}`}
        />
      </LabelGroup>
    </DataBlock>
  );
}

export function PredictionResult({ prediction }) {
  return (
    <div
      style={{
        display: "grid",
        gridTemplateColumns: "auto 1fr",
        gap: 16,
        justifyItems: "start",
        backgroundColor: "#f0f0f0",
        color: "#555",
        borderRadius: 12,
        fontSize: 20,
        padding: 12,
        marginTop: 12,
      }}
    >
      <p className="label" style={{ justifySelf: "end" }}>
        Normalized Bbox:
      </p>
      <NormalizedBbox normalizedBbox={prediction.normalizedBbox} />
      <p className="label" style={{ justifySelf: "end" }}>
        Category:{" "}
      </p>
      <CategoryPrediction prediction={prediction.category} />
      <p className="label" style={{ justifySelf: "end" }}>
        Traits:
      </p>
      <div
        style={{
          display: "grid",
          gridTemplateColumns: "1fr",
          gap: 16,
          justifyItems: "start",
        }}
      >
        {prediction.traits.map((trait) => (
          <TraitTypePrediction key={trait.id} prediction={trait} />
        ))}
      </div>
    </div>
  );
}

/* components/prediction.css */

.label {
  font-size: 20px;
  font-weight: bold;
  margin-top: 12px;
}

Lastly, double-check that the bottom of our component in App.js still renders each prediction with the PredictionResult component:

// App.js
      // continued...
      
      {imageFileContent ? (
        <img
          key={imageFileContent.name}
          src={imageFileContent.content}
          alt={imageFileContent.name}
          style={{ maxWidth: "100%", maxHeight: 480 }}
        />
      ) : null}
      {imagePredictions
        ? imagePredictions["predictions"].map((prediction, index) => (
            <PredictionResult key={index} prediction={prediction} />
          ))
        : null}
    </div>
  );
}

export default App;

Let's take a look at our final results:

Pretty neat!

Summary

In this article, we've seen that we can quickly:

  1. Create a React app that selects images and displays them
  2. Leverage the dragoneye-node package to use the Dragoneye API
  3. Get and display the results from the API in the React app

From here, there are so many possibilities for where you could take this app! You could leverage the results from Dragoneye to power site search, look for similar items, or help you in cataloging inventory.

If you want to get started, you can find all of the code for this article in our examples repo, and you can sign up for the Dragoneye API here. We haven't implemented API credits in the Free tier yet, but send us an email at support@dragoneye.ai and we'll be happy to add some for you manually to get you started (mention the ⚾️ emoji in your email).

If you found this helpful, we also wrote a follow-up guide on integrating the Dragoneye API into your backend services as you look to productionize these features. Check out our guide here on How to integrate vision AI to your backend service.

Happy building!
