Part 107: Give your Zabbix a computer vision and monitor bananas

What's up, home? part 107 cover image

Those of you who have read my blog for long enough might remember how I showed how to monitor the color of a banana. That was, of course, motivated by Steve Destivelle, the Banana Guy. My example back then was a quick hack just to show the potential, and compared to what's possible today with the inevitable march of LLMs, it was very primitive.

So, it's time for another quick kludge just to show you the potential of computer vision and do another round of banana monitoring. Let's use Ollama Vision for that. It's a model collection that can inspect any image and tell you what it sees. It's amazing how easy these things are to use nowadays.

Getting started

To get started, install Ollama. Go on, I'll wait.

After that, jump to the command line and install the LLaVA model with 7, 13, or 34 billion parameters -- I chose 7B, as for demo purposes it surely is good enough, and I'm not sure how much bigger a model my MacBook with 16 GB of RAM could handle anyway. Do the model installation with one of the following:

ollama run llava:7b
ollama run llava:13b
ollama run llava:34b

Let it download; it will grab several gigabytes' worth of files even for the 7B parameter model.

Next, test it with any image you might have:

ollama run llava "Tell me what you see in this image: ./your_image.jpg"

... where ./your_image.jpg is, of course, the path to your image. After a few seconds (on my MacBook Pro M2), it will comment back with something like:

The bananas in the image appear to be ripe and fresh, with a vibrant yellow color indicative of good quality fruit.

Of course, instead of being polite and dry, you can prompt it however you want: "Roast this image", "Tell me a joke about this image", whatever comes to your mind.

Connect it with Zabbix

There are way too many ways of doing this to list them all here: run your own Ollama server and use its API, use a Python or JavaScript code snippet, or use the command-line tool we've used so far; send the data to Zabbix with the agent, with zabbix_sender, or over the Zabbix API. Your requirements and imagination are the only real limits.
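As one hedged sketch of the API route: Ollama serves an HTTP API on port 11434, and its /api/generate endpoint accepts base64-encoded images for vision models like LLaVA. Something along these lines could fetch the comment and write it to the text file Zabbix then reads -- the file names and the prompt here are just placeholders, not what I actually run:

```python
import base64
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_payload(model, prompt, image_path):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {"model": model, "prompt": prompt,
            "images": [image_b64], "stream": False}

def describe_image(image_path, model="llava:7b",
                   prompt="Use one sentence to estimate the freshness "
                          "of these bananas in this image."):
    """Ask the local Ollama server to comment on an image."""
    payload = build_payload(model, prompt, image_path)
    req = request.Request(OLLAMA_URL,
                          data=json.dumps(payload).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

if __name__ == "__main__":
    # Write the comment where the Zabbix item can pick it up
    with open("robotic_eyes.txt", "w") as out:
        out.write(describe_image("bananas.jpg") + "\n")
```

From there it's a matter of taste whether the Zabbix agent reads the file, or the script pushes the line straight in with zabbix_sender.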

For this demo, I merely:

  • created a new template, Robotic Eyes, and attached it to my personal MacBook Pro, as it already has the Zabbix agent running and has the Ollama app
  • added a new item to the template, which just reads a text file on my laptop
  • made Ollama write to that text file every time I want to scan another image

The template

Here's my template in all its glory: yes, one item. Yes, just the Zabbix agent item type and its vfs.file.contents key, which reads the contents of my text file. Done.
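For reference, the item key would look something along these lines -- the path is just my guess at where the text file might live on your machine:

```
vfs.file.contents[/Users/janne/robotic_eyes.txt]
```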

Template item

Image commentator

And this is what generates the text file:

ollama run llava "Use one sentence to estimate the freshness of these bananas in this image: ./bananas.jpg" >~/robotic_eyes.txt

For now -- as this is just a simple demo -- I run this manually on my MacBook. Of course, in the real world it could and should be something more sophisticated: an incron entry, some API-based application, or something else entirely.
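If I wanted to automate it, even a tiny polling script would do. Here's one hedged sketch, assuming new photos get dropped into a folder and using the same 7B model and output file as above -- the drop-folder name and the 30-second interval are made up for illustration:

```python
import subprocess
import time
from pathlib import Path

OUT_FILE = Path.home() / "robotic_eyes.txt"
PROMPT = ("Use one sentence to estimate the freshness "
          "of these bananas in this image: {}")

def scan_once(watch_dir, seen):
    """Run ollama on any image in watch_dir we have not described yet."""
    new_images = [p for p in sorted(Path(watch_dir).glob("*.jpg"))
                  if p not in seen]
    for image in new_images:
        result = subprocess.run(
            ["ollama", "run", "llava:7b", PROMPT.format(image)],
            capture_output=True, text=True)
        # Overwrite the file the Zabbix item reads with the latest verdict
        OUT_FILE.write_text(result.stdout.strip() + "\n")
        seen.add(image)
    return new_images

if __name__ == "__main__":
    seen = set()
    while True:
        scan_once(Path.home() / "banana_inbox", seen)  # hypothetical drop folder
        time.sleep(30)
```

An incron or launchd trigger would of course be cleaner than a sleep loop, but this is the same idea with fewer moving parts.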

The dashboard

I used the Header widget by InitMAX for both the BA-NA-NA! title and for showing the banana image itself.

Dashboard

And see, this is how it comments on my bananas. I think it's now safe for me to eat them.

Banana quality history

How do I show the image of the bananas?

I snapped a photo with my iPhone, transferred it to my Raspberry Pi, and put it in a directory accessible by the nginx instance serving the Zabbix web frontend. Then, with the InitMAX Header widget, I did this:

Header widget configuration

Manual file copying. The worst HTML the world has seen: for example, my <img> tag does not specify the size of the image or any alternate text. Shame on me. But it works for demo purposes.
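For the record, a slightly less shameful version of that tag would look something like this -- the src path and dimensions here are made up, not what my dashboard actually uses:

```
<img src="/assets/bananas.jpg" alt="My bananas under surveillance" width="400" height="300">
```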

Of course, the traditional Zabbix URL widget could also work, but I like trying out other widgets too, now that we have them.

Real world use case examples

Sure, this is just me going bananas, but how about the real world? What if this kind of thing would monitor... well, it's 2025 and AIs are all around us, so this is happening already... but what if your Zabbix could be part of the chain monitoring

  • the quality of bananas in real life banana business
  • CCTV feed for burglars or other suspicious stuff
  • plants, flowers, whatever you might have in your garden
  • any custom image you want to throw at it

We are truly living in the future. Setting this demo up took just a few commands and about the simplest Zabbix configuration ever, and we're actually making the computer see and understand what's in the picture. All this for free and in open source. Thank you to everybody who makes this possible.
