Streaming ESP32-CAM Images to Multiple Browsers via MQTT

In this tutorial, you'll learn how to publish images from an ESP32-CAM board to multiple browsers using MQTT (Message Queuing Telemetry Transport). This setup lets you create something that functions much like a live video stream, viewable by an unlimited number of users. We will also show how to transform this stream of JPEG images into continuous video using Python and ffmpeg for real-time encoding, producing one-hour video segments, each compactly viewable in about 4 minutes. Ideal for time-lapse videos and surveillance, this setup also introduces AI-powered face detection, further enhancing surveillance efficiency by pinpointing moments of human presence in the footage for easy reference and review.

This hands-on tutorial includes ready-to-use source code! Curious why we chose Lua? Take a look at the tutorial: Why Smart C Coders Use Lua.

This tutorial builds on what you learned in the tutorial Your First Xedge32 Project.

Prerequisites

This hands-on tutorial teaches you how to use a ready-to-run ESP32 CAM Lua script. Simply install Xedge32 on your ESP32 CAM board, and you'll be set to execute the Lua script. Xedge32 works with most ESP32 and ESP32-S3 CAM boards and offers a suite of protocols, including MQTT.

Before diving in, make sure you have completed the prerequisite tutorials referenced above, such as Your First Xedge32 Project.

By building on the knowledge gained from these foundational tutorials, you'll be better equipped to follow along with this tutorial.

Publishing ESP32-CAM Images via MQTT

In the MQTT CAM code, our primary focus is publishing images without subscribing to other events. This publishing operation is managed by a timer event, which publishes images based on the intervals specified.

Setting Up the Timer

First, let's create a timer object. This timer will trigger the publishImage function at specific intervals.

timer = ba.timer(publishImage)

To interact with the ESP32 camera, initialize a camera object like so:

cam = esp32.cam(cfg)

The cfg parameter represents a configuration table. Important: make sure it matches the settings for your particular ESP32-CAM module. See the Lua CAM API for details.

Handling MQTT Connection Status

For monitoring MQTT connections, use the following callback function:

local function onstatus(type, code, status)
   if "mqtt" == type and "connect" == code and 0 == status.reasoncode then
      timer:set(300, false, true)  -- Activate timer every 300 milliseconds
      return true  -- Accept connection
   end
   timer:cancel()
   return true  -- Keep trying
end

The above function starts the timer when a successful MQTT connection is made. If the connection drops, it cancels the timer but will keep attempting to reconnect.

Image Publishing via Timer Callback

The core of the image publishing mechanism is the timer callback function, publishImage. This function captures an image using the camera object and publishes it via MQTT. The timer API supports several timer types; notably, this version runs as a Lua coroutine (akin to a thread). Within the coroutine, the function loops continuously, hibernating for the interval defined by the timer via coroutine.yield(true).

function publishImage()
   local busy = false
   while true do
      if mqtt:status() < 2 and not busy then
         busy = true -- thread busy
         ba.thread.run(function()
            local image, err = cam:read()
            if image then
               mqtt:publish(topic, image)
            else
               trace("cam:read()", err) -- log read failures
            end
            busy = false -- no longer running
         end)
      end
      coroutine.yield(true)  -- sleep
   end
end

The above function maintains flow control by not publishing an image if two images already populate the MQTT client's send queue. The cam:read function can be time-consuming -- not in human time, but in terms of microcontroller operations. As such, we offload the task of reading from the CAM object onto a separate thread. While this step isn't strictly necessary, it enhances the performance of applications juggling multiple operations alongside reading from the CAM. For a deeper dive into the threading intricacies, you are encouraged to refer to the Barracuda App Server’s documentation on threading.
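The same flow-control idea can be sketched in plain Python (an analogy only, not part of the Xedge32 code). A periodic tick skips capturing when two images are already queued or a capture is in flight, and offloads the slow read to a worker thread. Here capture_image, MAX_QUEUED, and the queue are illustrative stand-ins for cam:read and the MQTT client's send queue.

```python
import threading
import time
from queue import Queue

MAX_QUEUED = 2            # mirrors the Lua check 'mqtt:status() < 2'
send_queue = Queue()      # stand-in for the MQTT client's send queue
busy = threading.Event()  # mirrors the 'busy' flag in publishImage

def capture_image():
    """Placeholder for the slow cam:read() operation."""
    time.sleep(0.01)
    return b"\xff\xd8 fake jpeg bytes"

def publish_tick():
    """Runs on every timer tick, like the loop body in publishImage."""
    if send_queue.qsize() < MAX_QUEUED and not busy.is_set():
        busy.set()
        def worker():
            image = capture_image()  # slow read happens off the timer path
            send_queue.put(image)    # stand-in for mqtt:publish(topic, image)
            busy.clear()
        threading.Thread(target=worker).start()

for _ in range(5):  # five ticks, like the periodic Lua timer
    publish_tick()
    time.sleep(0.05)
```

After five ticks, only two images sit in the queue: once the queue holds MAX_QUEUED items, subsequent ticks skip capturing, which is exactly the back-pressure behavior the Lua code implements.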

The following shows the complete MQTT CAM code:

local topic = "xedge32/espcam/USA/92629"
local broker = "broker.hivemq.com"
 
-- Settings for 'FREENOVE ESP32-S3 WROOM' CAM board
local cfg={
   d0=11, d1=9, d2=8, d3=10, d4=12, d5=18, d6=17, d7=16,
   xclk=15, pclk=13, vsync=6, href=7, sda=4, scl=5, pwdn=-1,
   reset=-1, freq="20000000", frame="HD"
}
 
-- Open the cam
local cam,err=esp32.cam(cfg)
assert(cam, err) -- Throws error if 'cfg' incorrect
 
local timer -- Timer object; set below.
 
-- MQTT connect/disconnect callback
local function onstatus(type,code,status)
   -- If connecting to broker succeeded
   if "mqtt" == type and "connect" == code and 0 == status.reasoncode then
      timer:set(300,false,true) -- Activate timer every 300 milliseconds
      trace"Connected"
      return true -- Accept connection
   end
   timer:cancel()
   trace("Disconnect or connect failed",type,code)
   return true -- Keep trying
end
 
-- Create MQTT client
local mqtt=require("mqttc").create(broker,onstatus)
 
-- Timer coroutine function activated every 300 milliseconds
function publishImage()
   local busy=false
   while true do
      --trace(mqtt:status(), busy)
      -- Flow control: If less than 2 queued MQTT messages
      if mqtt:status() < 2 and not busy then
         busy=true
         ba.thread.run(function()
            local image,err=cam:read()
            if image then
               mqtt:publish(topic,image)
            else
               trace("cam:read()",err)
            end
            busy=false
         end)
      end
      coroutine.yield(true) -- sleep
   end
end
timer = ba.timer(publishImage)

While we have already covered the majority of the program's functionality, there are a few aspects we haven't touched upon yet:

  • Topic and Broker Configuration:

    • local topic = "xedge32/espcam/USA/92629": Sets the MQTT topic where the images will be published. Change this to a topic that is unique to you.
    • local broker = "broker.hivemq.com": Specifies the MQTT broker's address. The public HiveMQ broker is used in this example.
  • ESP32 Camera Configuration (cfg): This block sets up the specific pin configurations and settings for your ESP32 CAM board. Replace these settings with those appropriate for your hardware.

  • Creating the MQTT Client: The MQTT client is created with the require("mqttc").create(broker, onstatus) function, passing in the broker address and the onstatus callback.

  • Creating the Timer Object for publishImage: The timer is created by calling ba.timer and passing in the publishImage callback, which will be activated at regular intervals. This is the mechanism that continually captures and publishes images.

Subscribing to CAM Images with a JavaScript-Powered HTML Client

To visualize the images published by the ESP32 camera, you can use an HTML client. The following client will subscribe to the same MQTT topic to which the camera is publishing images. The client runs purely in your web browser and does not require any server setup.

The entire code for the HTML client is shown below:

<!DOCTYPE html>
<html lang="en">
<head>
  <title>Cam Images Over MQTT</title>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/mqtt/5.0.0-beta.3/mqtt.min.js"></script>
<script>
 
const topic="xedge32/espcam/USA/92629";
const broker="broker.hivemq.com";
 
window.addEventListener("load", (event) => {
    let img = document.getElementById("image");
    let msg = document.getElementById("msg");
    let frameCounter=0;
    const options = {
        clean: true,
        connectTimeout: 4000,
        port: 8884 // Secure websocket port
    };
    const client = mqtt.connect("mqtts://"+broker+"/mqtt",options);
 
    client.on('connect', function () {
        msg.textContent="Connected; Waiting for images...";
        client.subscribe(topic);
    });
     
    client.on("message", (topic, message) => {
        const blob = new Blob([message], { type: 'image/jpeg' });
        if(img.src) URL.revokeObjectURL(img.src); // release the previous frame's object URL
        img.src = URL.createObjectURL(blob);
        frameCounter++;
        msg.textContent = `Frames: ${frameCounter}`;
    });
});
</script>
</head>
<body>
    <h2>Cam Images Over MQTT</h2>
   <div id="image-container">
    <img id="image"/>
  </div>
    <p id="msg">Connecting...</p>
</body>
</html>

MQTT JavaScript client

At the top of the HTML file, the MQTT JavaScript library is imported to enable MQTT functionalities. This is found within the <script src=".......mqtt.min.js"></script> line.

Body Layout

The HTML body contains a <div> element with an id of "image-container" that will house the incoming images, and a <p> element with an id of "msg" that serves as a placeholder for status messages.

MQTT Configuration

In the JavaScript section, two constants topic and broker are defined. These must correspond to the topic and broker configurations in your mqttcam.xlua file.

Connecting to MQTT Broker

The client initiates an MQTT connection to the specified broker using the mqtt.connect() method, over secure WebSocket port 8884.

Handling Incoming Messages

Upon a successful connection, the client subscribes to the topic. Any incoming message on this topic is expected to be a binary JPEG image. The message is converted into a Blob and displayed as the source for the image element.

Frame Counter

A frameCounter variable keeps count of the incoming frames (or images) and displays this count as a text message below the displayed image.

By having this HTML file open in a web browser, you'll be able to visualize in real-time the images that are being published to the specified MQTT topic.

Preparing the Code

Step 1: Prepare the Lua Script as follows:

  1. As explained in the tutorial Your First Xedge32 Project, when the Xedge32 powered ESP32 is running, use a browser and navigate to the Xedge IDE.
  2. Create a new Xedge app called "cam" and LSP enable the app.
  3. Expand the cam app now visible in the left pane tree view.
  4. Right-click the cam app and click New File in the context menu.
  5. Type camtest.lsp and press Enter.
  6. Open the camtest.lsp file at GitHub and click the copy raw file button.
  7. Go to the Xedge IDE browser window and paste the content into the camtest.lsp file.
  8. Important: Adjust the cfg settings in camtest.lsp to match your specific ESP32 CAM board. See the Lua CAM API for details.
  9. Click Save, then click Open to test your cam settings. Make sure you see the image generated by the LSP script before proceeding.
  10. Right-click the cam app and click New File in the context menu.
  11. Type mqttcam.xlua and press Enter.
  12. Open the mqttcam.xlua file at GitHub and click the copy raw file button.
  13. Go to the Xedge IDE browser window and paste the content into the mqttcam.xlua file.
  14. Using the Xedge editor, update the topic variable xedge32/espcam/USA/92629 in the Lua script to your desired MQTT topic.
  15. Important: Copy the cfg settings from camtest.lsp and replace the cfg settings in mqttcam.xlua with the settings you tested in step 9.
  16. Click the Save & Run button to save and start the example.

Step 2: Prepare the HTML/JS File as follows:

  1. Download mqttcam.html, open the file in any editor, and ensure the topic in the HTML file matches the topic you set in the Lua script.
  2. Save the mqttcam.html file.
  3. Open mqttcam.html: Double-click the mqttcam.html file or drag and drop it into your browser. Note: this file is designed to be opened directly from the file system. You do not need a web server to host this file.
  4. Observe the Output: The webpage will display the images being published by the ESP32 CAM. The number of frames received will be displayed below the image.

Potential Issues with ESP32 CAM Boards and Solutions

ESP32 CAM boards are widely recognized for their versatility and affordability. However, they're not without challenges. One significant issue is interference between the camera read operation and the built-in WiFi module. Let's delve into the specifics:

Problem: Interference and WiFi Degradation

When the ESP32 CAM board is in operation, especially during the camera's read operation, it can generate noise. This noise interferes with the built-in WiFi, which results in:

  • Reduced Range: The distance over which the WiFi can effectively transmit and receive data can be notably decreased.

  • Decreased Throughput: The speed and efficiency at which data is transmitted over the WiFi network can be considerably hampered.

Solutions

To combat these issues, consider the following solutions:

  1. Use a CAM Board with an External Antenna: Several ESP32 CAM boards come equipped with or support the use of an external antenna. By using such a board and connecting an external antenna, you can boost the WiFi signal strength and range, mitigating some of the interference caused by the camera operations.

  2. Integrate the W5500 Ethernet Chip: If your application demands consistent and robust data transmission, consider incorporating the W5500 Ethernet chip. By utilizing Ethernet over WiFi, you are effectively sidestepping the interference issues associated with WiFi on the ESP32 CAM board. Xedge32 is equipped with integrated Ethernet drivers. When paired with hardware that supports it, like the W5500 chip, it can facilitate smooth and interference-free data transfer, ensuring that your application remains stable and efficient.

In conclusion, while the ESP32 CAM board is an excellent tool for a myriad of applications, it's crucial to be aware of its limitations and know how to circumvent them to ensure optimal performance.

Is this MQTT cam concept useless, or does it have practical applications? We've sparked a heated discussion. We'd love to hear your thoughts and insights. Join the conversation and give your input!

Converting MQTT JPEG Image Stream to Continuous Video

Turning a published stream of images into a regular video is straightforward with the right tools. You can efficiently create continuous video streams by leveraging a small Python script that subscribes to the MQTT image topic, combined with ffmpeg for real-time video encoding. This method has been successfully tested on a Raspberry Pi, but it should run on various host platforms, including Windows and Mac. The instructions here are tailored for Linux.

Before you begin, ensure the necessary packages are installed on your system. You can do this by running:

sudo apt-get install python3-pip ffmpeg
pip install paho-mqtt

With the above dependencies in place, you're ready to execute the Python script. Make sure to adjust the image topic name to match the one you are using. Setting up a local MQTT broker is recommended for optimal performance, especially in a network with multiple CAM devices. The script provided below is designed to be easily adaptable for such setups:

import paho.mqtt.client as mqtt
import subprocess
import time
import datetime
 
# MQTT settings
MQTT_BROKER = "broker.hivemq.com"
MQTT_TOPIC = "xedge32/espcam/USA/92629"
MQTT_PORT = 1883
 
# Function to start a new ffmpeg process
def start_ffmpeg_process():
    filename = datetime.datetime.now().strftime("vid_%Y%m%d_%H%M.mp4")
    process = subprocess.Popen(
        # -framerate 30 makes the 30 fps playback rate explicit
        # (ffmpeg's default for piped images is 25)
        ["ffmpeg", "-framerate", "30", "-i", "-", "-vf", "format=yuv420p", filename],
        stdin=subprocess.PIPE
    )
    return process
 
ffmpeg_process = start_ffmpeg_process()
start_time = time.time()
 
def on_message(client, userdata, message):
    global ffmpeg_process, start_time
    try:
        ffmpeg_process.stdin.write(message.payload)
    except Exception as e:
        print(f"Error sending image to ffmpeg: {e}")
 
    if time.time() - start_time >= 3600:
        ffmpeg_process.stdin.close()
        ffmpeg_process.terminate()
        ffmpeg_process.wait()
        ffmpeg_process = start_ffmpeg_process()
        start_time = time.time()
 
def on_connect(client, userdata, flags, rc):
    if rc == 0:
        print("Connected successfully.")
        client.subscribe(MQTT_TOPIC)
    else:
        print(f"Failed to connect, return code {rc}")
 
def on_disconnect(client, userdata, rc):
    print("Disconnected from MQTT broker.")
 
client = mqtt.Client()
client.on_message = on_message
client.on_connect = on_connect
client.on_disconnect = on_disconnect
client.connect(MQTT_BROKER, MQTT_PORT, 60)
client.loop_forever(retry_first_connection=True)

The above Python script converts a continuous stream of JPEG images received from the MQTT image topic into video files. Each video file represents one hour of footage. Note that the ongoing video segment is not viewable until the hour elapses and the video file is finalized.

Python Script Details:

  • The start_ffmpeg_process function initializes an ffmpeg subprocess that encodes incoming images into a video file.
  • The on_message callback writes each received JPEG image to the ffmpeg subprocess.
  • After one hour, the script closes the current ffmpeg process, finalizing the video file. It then starts a new ffmpeg process for the next segment.
  • This cycle repeats, creating a series of one-hour video files.
  • The video frame rate is set at 30 frames per second. Thus, assuming a capture rate of two images per second for an hour of image capturing, the total number of frames would be 3600 seconds x 2 images per second = 7200 images. Since these images are played back at 30 frames per second, the resulting video length will be 7200 images / 30 frames per second = 240 seconds, or 4 minutes long.
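The arithmetic in the last bullet can be checked with a few lines of Python (the numbers mirror the capture rate and frame rate discussed above):

```python
capture_rate = 2        # images per second published by the ESP32-CAM
segment_seconds = 3600  # one-hour segments
playback_fps = 30       # playback frame rate

frames = segment_seconds * capture_rate  # 7200 frames per segment
video_seconds = frames / playback_fps    # 240 seconds = 4 minutes
print(frames, video_seconds / 60)        # 7200 4.0
```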

Run the script in the background as follows:

nohup python img2vid.py &

Video Surveillance and Face Detection

Computer Vision is a field of artificial intelligence that enables machines to derive meaningful information from digital images. The OpenCV library enables easy image recognition without requiring a detailed understanding of AI and Computer Vision. The library simplifies implementing object detection and recognition, making it an excellent option for applications ranging from security systems to factory supervision.

OpenCV provides pre-trained models for various classification tasks, including face detection. The following script uses haarcascade_frontalface_default.xml, a model trained to detect frontal faces, which makes it particularly suitable for surveillance applications. In the final script, we integrate this face detection functionality into the video script, enhancing its ability to recognize and record human presence and offering a more complete surveillance solution.

Execute the following command to install OpenCV before running the script.

pip install opencv-python

Face detection script:

import paho.mqtt.client as mqtt
import subprocess
import time
import datetime
import cv2
import numpy as np
 
# MQTT settings
MQTT_BROKER = "broker.hivemq.com"
MQTT_TOPIC = "xedge32/espcam/USA/92629"
MQTT_PORT = 1883
 
# Load the Haar Cascade for face detection
face_classifier = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
 
def detect_human(image):
    # Convert image to grayscale
    gray=cv2.cvtColor(cv2.imdecode(np.frombuffer(image, np.uint8),cv2.IMREAD_COLOR),cv2.COLOR_BGR2GRAY)
    # Detect faces in the image
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)
    # Check if any faces are detected
    if len(faces) > 0:
        print("Human detected")
        # Save image with timestamp if a face is detected
        timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"human_detected_{timestamp}.jpg"
        with open(filename, 'wb') as file:
            file.write(image)
 
def on_message(client, userdata, message):
    detect_human(message.payload)
 
def on_connect(client, userdata, flags, rc):
    if rc == 0:
        print("Connected successfully.")
        client.subscribe(MQTT_TOPIC)
    else:
        print(f"Failed to connect, return code {rc}")
 
def on_disconnect(client, userdata, rc):
    print("Disconnected from MQTT broker.")
 
client = mqtt.Client()
client.on_message = on_message
client.on_connect = on_connect
client.on_disconnect = on_disconnect
client.connect(MQTT_BROKER, MQTT_PORT, 60)
client.loop_forever(retry_first_connection=True)

The above script processes a continuous stream of images sent over MQTT, detects human faces in each image using OpenCV, and saves any image in which a face is detected. When a message (image) arrives on the image topic, the on_message function triggers the detect_human function.

In detect_human, the image is first converted from JPEG format (as received in the MQTT payload) to a NumPy array using OpenCV's imdecode function. This is necessary because OpenCV processes images as NumPy arrays. The image is then converted to grayscale, a common preprocessing step in many computer vision tasks, including face detection: grayscale reduces the amount of data to be processed, and color information is not usually needed for detecting faces.

The haarcascade_frontalface_default.xml classifier, part of OpenCV's library, is specifically trained to detect human faces; it works by scanning the grayscale image for patterns, or features, typical of human faces. If any faces are detected, the script saves the original image (in color) with a timestamp in the filename, indicating when the face was detected.
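To illustrate the grayscale step in isolation, here is a small NumPy-only sketch applying the BT.601 luma weights that OpenCV's COLOR_BGR2GRAY uses; the 2x2 image is a made-up example, not data from the camera.

```python
import numpy as np

def to_grayscale(bgr):
    # BT.601 luma weights, the same formula OpenCV's COLOR_BGR2GRAY applies
    b = bgr[..., 0].astype(float)
    g = bgr[..., 1].astype(float)
    r = bgr[..., 2].astype(float)
    return (0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)

img = np.zeros((2, 2, 3), dtype=np.uint8)  # tiny all-black BGR image
img[0, 0] = (255, 255, 255)                # one white pixel
gray = to_grayscale(img)
print(gray[0, 0], gray.shape)              # 255 (2, 2)
```

The grayscale result has one value per pixel instead of three, which is why the classifier has only a third of the data to scan.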

Merging Video and Face Detection for Enhanced Surveillance

Our consolidated script unites video streaming with face detection capabilities, simplifying the operation of a surveillance system. The script saves images that capture human presence, each marked with a timestamp. This design allows users to quickly locate and view specific segments of video corresponding to the timestamped images, providing a quick reference to moments of interest.

import paho.mqtt.client as mqtt
import subprocess
import time
import datetime
import cv2
import numpy as np
  
# MQTT settings
MQTT_BROKER = "broker.hivemq.com"
MQTT_TOPIC = "xedge32/espcam/USA/92629"
MQTT_PORT = 1883
  
# Function to start a new ffmpeg process
def start_ffmpeg_process():
    filename = datetime.datetime.now().strftime("vid_%Y%m%d_%H%M.mp4")
    process = subprocess.Popen(
        # -framerate 30 makes the 30 fps playback rate explicit
        # (ffmpeg's default for piped images is 25)
        ["ffmpeg", "-framerate", "30", "-i", "-", "-vf", "format=yuv420p", filename],
        stdin=subprocess.PIPE
    )
    return process
  
ffmpeg_process = start_ffmpeg_process()
start_time = time.time()
 
# Load the Haar Cascade for face detection
face_classifier = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
  
def detect_human(image):
    # Convert image to grayscale
    gray=cv2.cvtColor(cv2.imdecode(np.frombuffer(image, np.uint8),cv2.IMREAD_COLOR),cv2.COLOR_BGR2GRAY)
    # Detect faces in the image
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)
    # Check if any faces are detected
    if len(faces) > 0:
        print("Human detected")
        # Save image with timestamp if a face is detected
        timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"human_detected_{timestamp}.jpg"
        with open(filename, 'wb') as file:
            file.write(image)
  
def on_message(client, userdata, message):
    detect_human(message.payload)
    global ffmpeg_process, start_time
    try:
        ffmpeg_process.stdin.write(message.payload)
    except Exception as e:
        print(f"Error sending image to ffmpeg: {e}")
    if time.time() - start_time >= 3600:
        ffmpeg_process.stdin.close()
        ffmpeg_process.terminate()
        ffmpeg_process.wait()
        ffmpeg_process = start_ffmpeg_process()
        start_time = time.time()
  
def on_connect(client, userdata, flags, rc):
    if rc == 0:
        print("Connected successfully.")
        client.subscribe(MQTT_TOPIC)
    else:
        print(f"Failed to connect, return code {rc}")
  
def on_disconnect(client, userdata, rc):
    print("Disconnected from MQTT broker.")
 
client = mqtt.Client()
client.on_message = on_message
client.on_connect = on_connect
client.on_disconnect = on_disconnect
client.connect(MQTT_BROKER, MQTT_PORT, 60)
client.loop_forever(retry_first_connection=True)

Keep Learning:

Congratulations on completing this tutorial! But the learning doesn't stop here. Delve further into the world of embedded systems with our extensive collection of Xedge32 tutorials.

Posted in Xedge32



