Esp32: Add camera module #59

Closed
gigwegbe opened this issue Jan 11, 2022 · 19 comments

@gigwegbe

Hello @mocleiri,
I recently came across your awesome project, and you are doing great work; I have tried a couple of the example projects. I would like to replicate a similar project, but this time with the esp32. So far I cannot find a camera module for the esp32, so I would appreciate it if you could add camera support to your firmware. You can check out micropython-camera-driver. Thanks.

@gigwegbe gigwegbe changed the title Add camera module Esp32: Add camera module Jan 11, 2022
@mocleiri
Owner

Thanks for filing this issue. I have a report from another user that they were able to successfully modify the firmware to include the micropython-camera-driver.

I'm hoping to get the steps from them and then can apply it.

I think the out-of-the-box build targets the esp32-cam-mb. I don't have that esp32 board set up, but I can add it as part of this issue.

What board are you using?

@uraich
Collaborator

uraich commented Jan 11, 2022

Hi @gigwegbe,
It looks like I had the same idea as you, and yesterday I managed to compile a micropython firmware for the esp32-cam board into which I integrated the ov2640 camera driver. This is the camera installed on the esp32-cam.
Unfortunately it is quite tricky to get the driver running under micropython, because it needs memory in PSRAM reserved for its image buffers, while micropython claims all of PSRAM for its heap. You therefore have to modify main.c of the esp32 micropython port to leave some space for the driver. In addition, the standard esp-idf does not ship with the camera driver, which must be added to the components folder. Finally, you need the micropython interface to the driver.
As I said, I believe I have a working firmware now, but I must first check whether I can read 96x96 pixel gray scale images from the camera. As soon as I manage to do this I will produce a write-up and publish it.
Which esp32 module and which camera are you using? Should you want to run tinyML on the esp32-cam, I can upload the firmware binary to github and give you access.
@mocleiri: The esp32-cam-mb is just a mother board for the (slightly modified) esp32-cam. It only provides the USB-to-serial converter that is missing on the esp32-cam, and it allows easy flashing. However, there is a problem with the serial RTS and DTR lines, which the standard Unix serial driver asserts; these lines are used to put the ESP32 into flash mode and to reset the board, so every tool you use must deassert RTS and DTR. This is easily done in (the newest development version of) thonny, rshell and gtkterm. Standard ampy and minicom do not let you do this.

@uraich
Collaborator

uraich commented Jan 12, 2022

For the moment I don't have much luck. When I only include the camera driver in micropython, everything looks fine:

[image: esp32-cam_ok]

but when I add ulab and tflm I get:

[image: esp32-cam_bad]

I guess this needs some in-depth debugging!

@mocleiri
Owner

@uraich

Do you have an esp prog board?

https://docs.espressif.com/projects/espressif-esp-iot-solution/en/latest/hw-reference/ESP-Prog_guide.html

It's an external jtag board that you can connect to an esp32 board.

You can use it with visual studio code to debug on the device.

I know the steps to get it and the stm32 st-link connection to work but haven't documented it yet.

Actually, what I know are the more complicated steps: I run the part that talks to the board on Windows while debugging in visual studio code inside the Windows Subsystem for Linux.

In this case it looks like some existing error handler is catching the fault and then failing.

You may be able to just search for the message to find the handler and then log something extra.

@uraich
Collaborator

uraich commented Jan 12, 2022

No, I don't have this board. I do have another board, though, that should allow me to do JTAG debugging. Unfortunately I left it in my second home.
Nevertheless I made some progress:

[image: 96x96image]

This is a 96x96 pixel image taken with the esp32-cam. However, it is jpeg, and I need the image as raw gray scale. I took it with the web server I have that implements a webcam.
Now I will have to write a small Python program that lets me take raw gray scale images. The esp32 camera component seems to allow this, but Mauro Riva's micropython interface does not give access to these features, so I will have to extend his code.
... and the tflm libraries are included as well.
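From the REPL side, the kind of extension being described might look roughly like this (a sketch: the `GRAYSCALE`/`FRAME_96X96` keywords are assumptions about an extended interface, not Mauro Riva's published API):

```python
# MicroPython sketch (runs on the esp32-cam itself, not on a PC).
# Assumed, extended driver API: format/framesize keywords for raw capture.
import camera

camera.init(0, format=camera.GRAYSCALE, framesize=camera.FRAME_96X96)
buf = camera.capture()          # expected: 96*96 = 9216 bytes of gray levels
camera.deinit()

with open("camImage.raw", "wb") as f:   # save to the esp32 flash filesystem
    f.write(buf)
```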

@gigwegbe
Author

gigwegbe commented Jan 13, 2022

@mocleiri,
Thanks for the swift feedback. I have a couple of esp32 boards.
Primarily, I use the ESP32 3.5" TFT Touch (Capacitive) with Camera at the moment. However, I have other boards such as the ESP-EYE and the TTGO-Camera.
@uraich I would appreciate access to your firmware; I want to experiment with some computer vision models on the esp32 board. Also, you can refer to the boards above for more details. Once again, thanks for the awesome work.

@uraich
Collaborator

uraich commented Jan 13, 2022

Personally I only have cheap esp32-cam boards, and I use these for my tests. I guess the firmware may also work on other esp32 boards with 4M flash and PSRAM, but I am not sure; you will have to try.
Please take a look at my tinyML repository to see if you can do anything with it.
Today I was able to acquire 96x96 pixel gray scale images, and I will now check whether the model can find out if there is a person (me!) on them or not.
All this is highly experimental and still very, very crude, but maybe you can do something with it?

@mocleiri
Owner

@uraich the cmake flag to point at an external/out of tree component is:

set(EXTRA_COMPONENT_DIRS ../../components/)

reference: https://github.com/espressif/tflite-micro-esp-examples/blob/4770f61a79e7567bd09320d8cb2da4f4ff46ca50/examples/person_detection/CMakeLists.txt#L2

I'm looking into #9 to incorporate the accelerated esp tflm kernels and that is how the espressif repo is bringing in esp-camera for the C++ person detection example.

We may move the accelerated kernels up into tflm, but at least to start, the esp-camera module could be resolved through the esp-camera component coming from https://github.com/espressif/tflite-micro-esp-examples.
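For reference, a minimal top-level CMakeLists.txt using that flag might look like this (paths and project name are illustrative, not the repository's actual file):

```cmake
# Sketch: pull in out-of-tree components such as esp32-camera.
cmake_minimum_required(VERSION 3.5)
set(EXTRA_COMPONENT_DIRS ../../components/)
include($ENV{IDF_PATH}/tools/cmake/project.cmake)
project(person_detection)
```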

@mocleiri
Owner

@gigwegbe I am happy to add additional board configs into the project.

All I need is a pointer to the board configuration you are currently using; then I can bring it into the project and adapt it to include the right modules, etc.

If you are able to use one of the generic board configs, that would also be useful to know for improving the documentation, i.e. a known-to-work boards list.

Please file a separate ticket. You can include all 3 boards in the same one.

That will get things set up so that, once this issue resolves, a firmware will be created automatically for your particular board.

@uraich
Collaborator

uraich commented Jan 15, 2022

@mocleiri: For the moment I have a working firmware on the esp32-cam with the camera driver included, and I tried the person detection example from your repository. Here is the result:

[image]

This looks pretty much like the printout from your README.md.
Then I wrote a program that continuously takes images and passes them into the model. Here are the results when I point the camera into my office:

[image: not_a_person]

and here when I point the camera towards myself:

[image: person]

Looks quite reasonable, doesn't it? Now I will try to switch on an LED on the esp32-cam when it detects a person.

@mocleiri
Owner

@uraich those are amazing results!

Congratulations on getting it all working.

I think the easiest way to fold your changes back into this repo is to file a pull request.

I know you have micropython changes, so just extract a patch of your changes and include it as a file in the branch you make the pull request on.

$ cd micropython
$ git diff > ../micropython-changes.diff
$ cd ..
$ git add micropython-changes.diff

I'll try my best to incorporate them without needing to modify upstream micropython. However I will if that's what is needed.

@uraich
Collaborator

uraich commented Jan 15, 2022

I have improved the program a little more: I now light the flash LED built into the ESP32-cam when a person is seen, and switch it off when no person is around. This works rather well. I had to use PWM to reduce the light intensity, which is otherwise extremely bright.
As I said, all this is still pretty crude, but I will try to integrate it properly into your repository so I can open a pull request. I will need some time for this, though.
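The PWM dimming could look roughly like this (a sketch; GPIO 4 driving the flash LED is the usual ESP32-CAM wiring, and the duty value is just an arbitrary low setting):

```python
# MicroPython sketch (runs on the esp32-cam itself, not on a PC).
from machine import Pin, PWM

flash = PWM(Pin(4), freq=1000)  # GPIO4: on-board flash LED on common ESP32-CAMs
flash.duty(30)                  # duty range is 0..1023 on the esp32 port; keep it low
# ... person no longer detected ...
flash.duty(0)                   # LED off
```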
What I find really amazing is that all this runs on a machine that I bought for less than 8 Euros:

  • the CPU
  • the main board with a USB to serial converter
  • the ov2640 camera
  • and even shipping

@uraich
Collaborator

uraich commented Jan 19, 2022

As desired, I created a pull request.
In order to integrate the esp32-cam driver, you need the Espressif esp32-camera component and the esp32-cam driver by Mauro Riva. I included Mauro's driver in the micropython-modules directory, and I had to modify the micropython.cmake file to add it. You must also change main.c in ports/esp32; the diff file is included. I created a new board file (MICROLITE_ESP32_CAM) with all the modifications required and a link to esp32-camera. Please check the pull request and tell me what needs to be changed.
I also added a directory named esp32-cam to examples/person_detection.
person_detection-cam.py continuously reads images from the camera and passes them to the model. If a person is detected, the flashlight is switched on (at low intensity).
In imageTools you find the program readImage.py, which reads a single 96x96 gray scale image and saves it under the name "camImage.raw" on the esp32 flash. getImage.sh copies this file to the local PC folder (rshell needs to be installed). showGrayScaleImg.py img_name displays the image stored in img_name.
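As an aside, such a raw dump can also be viewed without extra tooling by wrapping it in a PGM header; this helper is only illustrative and not the repository's showGrayScaleImg.py:

```python
# Convert a 96x96 raw gray scale dump (as produced by a readImage.py-style
# tool) into a binary PGM file that common image viewers can open.
def raw_to_pgm(raw_path, pgm_path, width=96, height=96):
    with open(raw_path, "rb") as f:
        data = f.read()
    if len(data) != width * height:
        raise ValueError("unexpected raw image size: %d bytes" % len(data))
    with open(pgm_path, "wb") as f:
        f.write(b"P5\n%d %d\n255\n" % (width, height))  # binary PGM header
        f.write(data)                                    # pixel payload as-is

# Example: raw_to_pgm("camImage.raw", "camImage.pgm")
```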

@mocleiri
Owner

I made some adjustments to your pull request and have merged the combined work in e6e2a4b

I'm working on getting it to run via a github action.

Recent commits have gone into micropython that change how the malloc-allocated heap works: they leave 50% of the available space for non-micropython purposes.

I adjusted the micropython patch to force micropython into the same mode when SPIRAM is being allocated using malloc. So we get a 2 MB heap and there are 2 MB available for esp32-camera to allocate.

@mocleiri
Owner

I got the github action working: https://github.com/mocleiri/tensorflow-micropython-examples/actions/runs/1767505151

I just downloaded and flashed my M5 Timer Camera and inference still works.

@uraich can you download the microlite-spiram-cam firmware from the above action and report back if it still works for you?

@gigwegbe I checked all 3 boards you posted: they all have 8 MB SPIRAM and 4 MB flash, and they use the ov2640 sensor. Can you try the above firmware and report whether it works for you?

If it doesn't work I'm happy to add other board configs.

I hope there is enough info here on how to run it: https://github.com/mocleiri/tensorflow-micropython-examples/blob/main/examples/person_detection/README.md#running-on-an-esp32-with-a-camera

@uraich
Collaborator

uraich commented Jan 31, 2022

@mocleiri: As requested, I tried the new version, and it works the same way as the version for which I made the pull request. Excellent! I downloaded the binaries but did not know how to flash them (and was too lazy to look it up). I therefore built the firmware from scratch, using the upgraded repository and following the instructions in build_esp32.yml.
There I saw that commands like
rm -rf builds
came back, with one s too many.
My accelerometers should arrive within the next week or two, and then I will give those a try, starting by finding or writing a driver for them and trying it out. I first want a simple program that lets me acquire gestures and display them in a human-readable form.

@mocleiri
Owner

Thanks again for your contributions and testing that I got it right.

I can never remember the flash options either. I have them in the README, although they are quite far down now: https://github.com/mocleiri/tensorflow-micropython-examples#flash-from-windows
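For the record, a typical esptool invocation for flashing an ESP32 firmware image looks like the following (port and file names are assumptions; the README instructions are authoritative):

```shell
# Erase, then write the firmware at the standard ESP32 app offset 0x1000.
esptool.py --chip esp32 --port /dev/ttyUSB0 erase_flash
esptool.py --chip esp32 --port /dev/ttyUSB0 --baud 460800 \
    write_flash -z 0x1000 firmware.bin
```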

I'll correct those extra s's.

@uraich
Collaborator

uraich commented Jan 31, 2022

Thanks for the link. It shows how to do it from Windows, but for Linux only the serial port name needs to be adapted.
I noted it for future use.

@mocleiri
Owner

mocleiri commented Feb 3, 2022

I'm closing this as it has been integrated.

@gigwegbe please open new issues if there are any issues with your specific camera boards.

I'm happy to add additional board configs.

@mocleiri mocleiri closed this as completed Feb 3, 2022