Categorizing trail camera nature photos with vision AI (2024 guide for a Python Jupyter notebook using the REST API)

September 5, 2024

If you haven't already, read our previous guide, where we cover creating a Python FastAPI service to recognize furniture items. In this post, we will skip over some of the window dressing and focus on the exciting stuff: how to use our new universal detector! This time, though, we will show you how to interface with Dragoneye using the REST API rather than our Python SDK.

You can grab the code resources for this post from our GitHub here as well.

Getting started with Jupyter notebooks

If you're not familiar with Jupyter notebooks already, they are an interactive environment for writing and running code. They are particularly useful because they allow you to combine code, visualizations, and narrative text in a single file. This makes them an excellent tool for exploring data and experimenting quickly with code.
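For a feel of what that looks like, here's a tiny, throwaway example of a notebook cell (nothing specific to this guide): the value of the last expression in a cell is rendered directly beneath it, which is what makes quick experiments so convenient.

# A notebook cell can mix comments, computation, and inline output
confidences = [0.91, 0.84, 0.77]
average_confidence = sum(confidences) / len(confidences)
average_confidence  # the last expression in a cell is displayed below it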

Setting Up Visual Studio Code for Jupyter

If you're using Visual Studio Code (VS Code), setting up Jupyter notebooks is straightforward. These instructions assume that you already have VS Code and Python installed.

1. Install the Python Extension

Open VS Code, go to the Extensions view by clicking on the Extensions icon in the Activity Bar on the side of the window, and search for "Python". Install the official Python extension provided by Microsoft.

2. Install Jupyter

In the VS Code terminal, you can install Jupyter by running the following command:

> pip install jupyter

This will install the Jupyter package, which is necessary for running notebooks.

3. Create a New Jupyter Notebook

Once Jupyter is installed, you can create a new notebook by going to the Command Palette (Ctrl+Shift+P, or Cmd+Shift+P on macOS) and typing "Jupyter: Create New Blank Notebook". Select this option, and a new notebook will open.

4. Start Coding

You can now write Python code in the cells and run each one individually by clicking the "Run" button or pressing Shift+Enter.
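If you want a quick sanity check that everything is wired up, run a throwaway cell like this one before moving on:

print("Hello from Jupyter!")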

5. Save Your Work

You can save your notebook by clicking "File" > "Save As" and giving it a name with the .ipynb extension.

With these steps, you're ready to begin using Jupyter notebooks in Visual Studio Code.

Creating a model

To create your first Recognize Anything model, you will first need to create a Dragoneye account if you haven't done so already. You can sign up easily through Google or GitHub here.

Once you are logged in, navigate to the Recognize Anything tab on the left-hand side of the screen, or use this link. There you will see a button in the bottom right that says Create Model.

After pressing this button, give your model a name (e.g. animal_model).

Then enter the animals that we will be recognizing. The animals appearing in our test images are:

  • Ram
  • Mountain lion
  • Deer

Feel free to add as many animals as you want to your model and you'll immediately be able to see the types of results you can expect!

When you are ready, press Deploy.

Getting an API key

If you don't already have a key, you will need to create one first. You can see how in the docs here.
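One optional tip before we start coding: rather than pasting the key directly into your notebook, you can keep it in an environment variable and read it from Python. Here's a minimal sketch, assuming you've exported a variable called DRAGONEYE_API_KEY (the variable name is just an example):

import os

# Assumes you've run something like: export DRAGONEYE_API_KEY=your_key_here
AUTH_TOKEN = os.environ["DRAGONEYE_API_KEY"]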

Integrating with the API

Now that we have our model deployed and an API key, we can begin writing some code!

First, let's fetch our images. I've chosen some trail camera images, but feel free to use any images you like!

In addition, we can set up a couple of utils for working with those images.

If you've downloaded the code from our GitHub, you can follow along in that notebook; if not, create your own notebook file called trail_cam.ipynb and start coding!

import asyncio
import io
import aiohttp
from PIL import Image

image_urls = [
    # Credit: Scott Foster
    "https://neont.s3.amazonaws.com/wp-content/uploads/2017/11/Scott-Foster-Too-Pooped-to-Party.jpg",
    # Credit: Katie McPherson
    "https://storageciggallery.addons.business/13611/cig-cozy-gallery-6892vUf-Katie-McPherson-CA-hd.jpg?c=00",
    # Credit: Kyle Finger
    "https://storageciggallery.addons.business/13611/cig-cozy-gallery-6892nr8-KyleFingerWisconsin-hd.jpg?c=00",
    # Credit: Cayuga Nature Center
    "https://images.squarespace-cdn.com/content/v1/5d3cb13b96f9ac0001e89cf6/1578338468978-4U1LVRPRLU2B83UR77BX/Smith-Woods-trees.JPG",
]

async def fetch_image(session: aiohttp.ClientSession, url: str) -> Image.Image:
    # Download a single image and load it into a PIL Image
    async with session.get(url) as response:
        response.raise_for_status()
        image_bytes = await response.read()
        return Image.open(io.BytesIO(image_bytes))

async def fetch_images(urls: list[str]) -> list[Image.Image]:
    # Fetch all of the images concurrently over a single HTTP session
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*[fetch_image(session, url) for url in urls])

def get_bytes_from_image(image: Image.Image) -> bytes:
    # Re-encode the image as JPEG bytes so we can send it to the API
    img_byte_arr = io.BytesIO()
    image.save(img_byte_arr, format='JPEG')
    return img_byte_arr.getvalue()

images = await fetch_images(image_urls)

Now if you press Run Cell, we should get our images! Once they're loaded, let's take a look at them. Jupyter notebooks let us display the images inline:

from IPython.display import display # pyright: ignore

for image in images:
    display(image)

Now we can start using the Dragoneye API to detect the animals in these images! Here is where you'll need your API key as well as the name of the model we created above. You can find the name here, but if you followed along, it should be recognize_anything/animal_model.

from typing import Any

MODEL_NAME = "<YOUR_MODEL_NAME>"
AUTH_TOKEN = "<YOUR_AUTH_TOKEN>"

async def get_prediction(session: aiohttp.ClientSession, model_name: str, image: Image.Image) -> dict[str, Any]:
    # Send the image bytes and model name to the predict endpoint and return the parsed JSON response
    async with session.post(
        "https://api.dragoneye.ai/predict",
        data={
            "image_file": get_bytes_from_image(image),
            "model_name": model_name,
        },
        headers={
            "Authorization": f"Bearer {AUTH_TOKEN}",
        }
    ) as response:
        return await response.json()


async with aiohttp.ClientSession() as session:
    # Run the prediction requests for all images concurrently
    prediction_results = await asyncio.gather(
        *[get_prediction(session, MODEL_NAME, image) for image in images]
    )
    images_with_prediction_results = zip(images, prediction_results)

With this, we should have our predictions! Let's see what they look like. The following code pulls out the objects that were detected in each image.

def all_objects_in_image(prediction_results: dict[str, Any]) -> set[str]:
    # Collect the unique category names detected in a single image's results
    return {
        prediction["category"]["name"]
        for prediction in prediction_results["predictions"]
    }

images_with_objects_in_image = [
    (image, all_objects_in_image(prediction_results))
    for image, prediction_results in images_with_prediction_results
]
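For reference, the parsing above only relies on the response containing a list of predictions, each with a named category. As a trimmed-down, hypothetical illustration of that shape (see the API docs for the actual response schema):

# A hypothetical, trimmed-down result in the shape the parsing code expects
example_prediction_results = {
    "predictions": [
        {"category": {"name": "deer"}},
        {"category": {"name": "ram"}},
    ],
}

assert all_objects_in_image(example_prediction_results) == {"deer", "ram"}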

Lastly, let's print it all out.

for image, objects_in_image in images_with_objects_in_image:
    print(f"Animals detected in image: {objects_in_image}")
    display(image)

If everything worked, you should see each image displayed along with the animals detected in it.

And there it is: we have successfully detected animals in our trail camera images!
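If you'd like to take it one small step further (this isn't part of the walkthrough above, just an idea), you can reuse the results to tally how many images each animal appears in:

from collections import Counter

# Count, across all images, how many images each animal was detected in
animal_counts = Counter(
    animal
    for _, objects_in_image in images_with_objects_in_image
    for animal in objects_in_image
)
print(animal_counts)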

Next steps

This post showed you some of the basics of working with Dragoneye's Recognize Anything API, but the possibilities are endless. You can build custom image recognition models for any use case.

Stay tuned for our next post where we will extend this example to identify animals in videos in real-time!
