I'm an MEng student majoring in Electrical and Computer Engineering at Cornell University. This is a record of all the labs I completed for ECE 5160: Fast Robots.
Lab 1 was split into two parts. Lab 1A focused on getting comfortable with the Artemis board and the Arduino IDE (uploads, LED, serial, temperature sensor, and microphone). Lab 1B focused on building a BLE pipeline between a Jupyter notebook (Python) and the Artemis (Arduino), so I could send commands wirelessly and receive timestamped sensor/debug output.
I installed the Arduino IDE and the SparkFun Apollo3 board support package (via the additional boards manager JSON). After selecting the RedBoard Artemis Nano board definition, I attempted to upload a simple sketch. My first issue was that the bootloader would not connect, so the board initially failed to burn/upload. After reseating the USB cable/connector and retrying the upload (and confirming the correct port), the board eventually uploaded successfully and I continued with the examples.
Note: The Artemis is a 3.3 V logic board, so inputs should be kept at 3.3 V-level signals.
I connected the Artemis to my laptop and selected the correct board (RedBoard Artemis Nano) and the correct serial port. The main problem I ran into here was the upload failing because the bootloader did not connect at first. Once the port and physical connection were stable, uploads consistently worked.
I uploaded the Blink example and verified the onboard LED toggled at the expected rate. This confirmed that the board could compile and accept uploads reliably after fixing the bootloader/connection issue.
I ran the Apollo3 serial example and used the Serial Monitor to send input to the board and read back output. This verified that USB serial communication worked, and it also helped me confirm the correct baud rate settings when output looked incorrect.
I ran the temperature sensor example and observed the readings changing when I warmed the chip (touching or blowing on it). The change was not instant, which makes sense because the chip temperature takes time to rise and settle.
I ran the PDM microphone output example and verified that the measured output changed when I spoke or whistled near the board. This confirmed that the onboard microphone pipeline was functional.
For the additional task, I combined microphone input with serial output so the program could detect a few target frequency bands and print the corresponding note name over serial. The key idea was to estimate a dominant frequency from the microphone samples and then map that frequency to one of three note labels (within a tolerance). The three target frequencies I used were 440.0f (A4), 523.25f (C5), and 659.25f (E5). I used the loudest (largest-magnitude) frequency as the dominant one to reject background noise as much as possible.
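The mapping step can be sketched as follows. This is illustrative, not my exact lab code: the function name `classify_note` and the 15 Hz tolerance are assumptions.

```cpp
#include <cassert>
#include <cmath>
#include <cstring>

// Given the dominant frequency estimated from the mic samples, return a note
// label if it falls within a tolerance of one of the three targets.
const char* classify_note(float freq_hz, float tol_hz = 15.0f) {
    const float targets[3] = {440.00f, 523.25f, 659.25f};  // A4, C5, E5
    const char* names[3]   = {"A4", "C5", "E5"};
    for (int i = 0; i < 3; i++) {
        if (std::fabs(freq_hz - targets[i]) <= tol_hz) return names[i];
    }
    return "unknown";  // dominant frequency not near any target
}
```

On the board, the returned label is what gets printed over serial each time a dominant frequency is detected.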
I set up a Python environment for the BLE codebase and used JupyterLab to run commands from my laptop to the Artemis. On the board side, I installed the ArduinoBLE library and flashed the provided BLE sketch so the Artemis could advertise a BLE service and exchange command/data strings with my laptop.
After flashing the BLE sketch, the board printed its MAC address over serial. I copied that MAC address into
connections.yaml so the Python notebook connected to the correct device instead of scanning randomly.
I generated a fresh UUID in a Jupyter cell and updated both the Arduino BLE service UUID and the Python config UUID to avoid accidentally connecting to a classmate’s board. This ensured my laptop only recognized my service.
I checked that the characteristic UUIDs used in the Arduino sketch were consistent with the UUIDs referenced on the Python side so the notebook could write commands and subscribe to notifications correctly.
The notebook sends commands using an integer command ID. I confirmed that the command IDs in Python matched the Arduino enum,
so each command executed the intended case in the Arduino handle_command() switch statement.
After making changes to UUIDs/commands, I reflashed the Arduino sketch and then ran through the demo notebook cells in order. This confirmed the board could connect, accept command strings, and send responses back to the laptop.
I sent a string payload from Python using the ECHO command. On the Artemis side, the command handler extracted the string from the robot command buffer and wrote back a response string through the TX string characteristic. On the laptop side, I read the reply and confirmed the response contained the original text with an added prefix.
For SEND_THREE_FLOATS, I formatted the command in Python by packing three numeric values into the command string using the
delimiter format expected by the codebase (values separated by |). After sending the command over BLE, the Artemis received the
string and handle_command() parsed the next three values as floats. I printed the parsed floats over serial to verify they matched
the values sent from the laptop. This confirmed that multi-value payloads could be transmitted reliably and decoded with the correct types.
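The parsing step on the Artemis can be sketched like this. The function name and fixed-size buffer are illustrative assumptions; the real codebase parses the values out of the robot command buffer inside handle_command().

```cpp
#include <cstdlib>
#include <cstring>

// Parse a "a|b|c" payload into three floats, mimicking what handle_command()
// does with a SEND_THREE_FLOATS command string.
bool parse_three_floats(const char* payload, float out[3]) {
    char buf[64];
    std::strncpy(buf, payload, sizeof(buf) - 1);  // work on a local copy
    buf[sizeof(buf) - 1] = '\0';
    int n = 0;
    for (char* tok = std::strtok(buf, "|"); tok != nullptr && n < 3;
         tok = std::strtok(nullptr, "|")) {
        out[n++] = std::strtof(tok, nullptr);  // decode each token as a float
    }
    return n == 3;  // true only if all three values were present
}
```

Printing the parsed floats over serial (as I did) is a quick way to confirm the delimiter format matches on both sides.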
I added/used a GET_TIME_MILLIS command so the Artemis could respond with its current millisecond timer.
The robot wrote the time value into a string message and sent it back via the TX string characteristic.
I set up a Python notification handler subscribed to the RX/TX string characteristic. Each time the Artemis published a string, the callback ran and parsed out the timestamp portion, then appended it to a list for later analysis.
I ran a loop that repeatedly requested/sent timestamps for a few seconds. On the laptop, the notification handler recorded arrival times
and parsed timestamps. I computed the effective message rate as:
(number of received messages) / (total collection time).
Effective transfer rate: I estimated the data rate by multiplying the message rate by the approximate payload size per message. Since BLE has overhead and string formatting adds extra characters, the true throughput was lower than the "ideal" raw-byte rate.
Result: 239 messages → effective rate ≈ 47.80 messages/s.
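The computation above can be written out explicitly. The 5 s window is implied by 239 messages at ≈47.8 messages/s, and the 20-byte average payload is an assumed figure for illustration, not a measured one.

```cpp
// Effective-rate estimate: message rate from a counted run, byte rate from
// an assumed average payload size per message.
struct RateEstimate {
    double msg_rate_hz;    // messages per second
    double byte_rate_Bps;  // approximate bytes per second
};

RateEstimate estimate_rate(int messages, double seconds, double bytes_per_msg) {
    RateEstimate r;
    r.msg_rate_hz   = messages / seconds;
    r.byte_rate_Bps = r.msg_rate_hz * bytes_per_msg;
    return r;
}
```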
Instead of transmitting each sample immediately, I stored timestamps in a fixed-size array on the Artemis. Once the array was filled
(or a stopping condition was met), I used a SEND_TIME_DATA command to loop over the stored values and transmit them as a batch.
On the Python side, I appended received entries into a list and checked that the counts matched.
On the Arduino side, a final printed count of 1000 confirmed that all 1000 timestamps had been recorded, which served as a simple sanity check while debugging. I used the same check for Task 7 below.
I created a second array the same length as the timestamp array to store temperature readings so each index corresponded to the same sample time.
When I ran GET_TEMP_READINGS, the Artemis looped through both arrays and sent paired entries (timestamp + temperature).
The Python notification handler parsed each message and populated two lists.
Streaming method: easier for live debugging because data shows up immediately, but frequent notifications can bottleneck due to BLE overhead
and Python-side processing.
Buffered method: supports faster sampling on the robot (because sampling is local), but requires careful memory management and an extra “dump data”
step to transmit the stored values.
How quickly can buffered sampling run? It is mainly limited by the sensor read + array write on the Artemis, not by BLE. BLE is only involved when dumping data afterward.
Memory estimate: The Artemis has 384 kB RAM. If timestamps are stored as 32-bit values (~4 bytes each) and temperatures are 32-bit floats (~4 bytes each),
then each paired sample costs ~8 bytes total. In theory, 384,000 / 8 ≈ 48,000 samples could fit if nothing else used RAM, but realistically the program needs
stack/other buffers, so the safe number is lower. In my implementation, using arrays of length 1000 only uses ~8000 bytes for the paired data, which is comfortably within RAM.
I also experimented with performance by timing how long it took to send a command and receive a reply, comparing short replies and larger replies. In general, many small packets introduce proportionally more overhead, while larger replies can improve effective throughput (up to the BLE write/notify limits). I also tested reliability by increasing publish rate and checking whether the laptop missed messages.
Plot(s): Data rate / RTT vs payload size
In essence, each packet carries a fixed amount of overhead. Small payloads are inefficient because transmitting the same total data as many small packets requires more transmissions, and therefore more cumulative overhead, than sending it as fewer large ones. Though not tested exhaustively, I expect the relationship between payload size and effective data rate to be roughly linear up to the BLE packet-size limit.
I also tested reliability by increasing the publish rate and checking whether the laptop missed messages. I did this by sending 500 integers (0-499) and verifying that every one arrived. Since nothing was missed and everything was output in order from 0 to 499, I can confidently say the computer received everything.
Overall, Lab 1A was mostly about getting a reliable “compile → upload → verify” workflow with the Artemis board. The main issue I hit early on was getting the board to burn at all because the bootloader would not connect. Once that was resolved, the example sketches were straightforward, and each one served as a quick test of a different subsystem (LED, serial I/O, temperature sensor, microphone).
Lab 1B was more challenging to debug because the BLE pipeline adds multiple moving parts: Python-side command formatting, UUID/config matching, and
Arduino-side command parsing. A key debugging lesson was being disciplined about variable scope, especially for data collection tasks.
Arrays that store timestamps and temperatures need to persist across commands, so they must be global (or otherwise long-lived) rather than
declared inside handle_command(), where they would reset each time a command is processed. Similarly, state variables used to coordinate multi-step
behaviors (like “start collecting” vs “dump the buffer”) need to be stored in a way that survives between BLE command events.
Once the scope issues were fixed, the system became a practical wireless debugging tool: I could send structured commands from a notebook, receive responses through notifications, and log data into Python lists for plotting and analysis. This BLE framework will be useful in later labs where USB serial isn’t convenient during motion or sensor experiments.
In this lab, I connected the ICM-20948 IMU to the Artemis using the Qwiic connectors, verified the vendor example, and then built a pipeline to (1) compute pitch/roll from the accelerometer, (2) analyze accelerometer noise using an FFT, (3) apply a low-pass filter to reduce vibration/noise, (4) compute angles by integrating gyroscope rates (and observe drift), and (5) fuse both sensors using a complementary filter for a stable estimate.
I connected the ICM-20948 breakout to the Artemis using the Qwiic (I2C) connectors. This provides SDA/SCL plus power/ground with a keyed cable, which is reliable and avoids wiring mistakes.
I installed the SparkFun 9DoF IMU Breakout - ICM-20948 - Arduino Library and ran the Example1_Basics sketch.
The AD0_VAL definition controls the least-significant bit of the IMU’s I2C address (it reflects the state of the AD0 pin/jumper).
On the SparkFun Qwiic breakout, the default is typically AD0=1 (often corresponding to address 0x69), and closing the ADR jumper sets
it to AD0=0 (often address 0x68). The correct value is whichever one matches the hardware jumper configuration.
When I rotated or flipped the board, the accelerometer axes changed in a way consistent with gravity re-projecting onto the sensor axes. When the board was roughly flat and still, the Z-axis stayed near ~1g (≈1000 mg), while X/Y stayed near ~0 (with noise). The gyroscope measured angular velocity (deg/s), so when the board was not rotating it hovered around 0, and when I rotated it I saw spikes whose sign depended on rotation direction.
Using the class equations, I computed pitch and roll from accelerometer readings (units can be mg since ratios cancel). A common stable form is:
// ax, ay, az in mg (or m/s^2), output in degrees
roll_acc = atan2( ay, az ) * 180/pi
pitch_acc = atan2( -ax, sqrt(ay*ay + az*az) ) * 180/pi
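A runnable version of the two equations above is shown below; function names are mine, and as noted, inputs can be in mg or m/s² since only ratios matter.

```cpp
#include <cmath>

const float RAD2DEG = 180.0f / 3.14159265358979f;

// Roll from gravity projected onto the Y/Z axes, in degrees.
float roll_acc(float ay, float az) {
    return std::atan2(ay, az) * RAD2DEG;
}

// Pitch from gravity projected onto X versus the Y-Z plane magnitude; this
// form stays stable when az passes through zero.
float pitch_acc(float ax, float ay, float az) {
    return std::atan2(-ax, std::sqrt(ay * ay + az * az)) * RAD2DEG;
}
```

For example, a flat board (az ≈ 1000 mg, ax ≈ ay ≈ 0) gives roll ≈ pitch ≈ 0, and rotating gravity fully onto one axis gives ±90°, matching the console outputs below.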
I tested known orientations using the table surface/edges as guides. Below are my console outputs:
0° reference: Pitch ≈ 0.573°, Roll ≈ −0.437°
+90° roll: Roll ≈ 89.320°, Pitch ≈ 0.794°
−90° roll: Roll ≈ −88.841°, Pitch ≈ 1.876°
+90° pitch: Pitch ≈ 89.797° (roll becomes unstable near ±90° pitch)
−90° pitch: Pitch ≈ −89.021° (roll becomes unstable near ±90° pitch)
The roll readings were very accurate at ±90° roll, and pitch was accurate at ±90° pitch. However, when pitch is near ±90°, the gravity vector aligns closely with the X-axis, which makes roll poorly conditioned. Small placement errors also contributed, since I could not hold the board perfectly vertical or horizontal. That is why the printed roll at +90° pitch can look very wrong even though pitch itself is correct.
I captured and transmitted a time-stamped IMU log using the format:
t_us, ax_mg, ay_mg, az_mg, gx_dps, gy_dps, gz_dps.
In my quiet run, I captured 663 samples over ~5.29 s, for an effective sampling frequency of
~125.08 Hz (target IMU period was 5000 µs → ~200 Hz, but overhead reduced the achieved rate).
In quiet conditions, the accelerometer Z-axis stays near ~1g (~1000 mg) with small noise, while X/Y hover near 0. The gyroscope axes hover near 0 deg/s with noise; any bumps/hand motion show up as spikes.
The accelerometer-derived pitch/roll estimates are fairly noisy because accelerometers measure both gravity and linear accelerations/vibrations. Even when still, micro-vibrations and sensor noise cause jitter in the computed angles.
To reduce bias and small scaling error, I applied a simple two-parameter calibration on pitch:
pitch_calibrated = a * pitch_raw + b
From my calibration run, the fitted parameters were: a = 1.0150104375747704, b = −0.28927677393249907. This slightly increases the scale and shifts the offset so that the “flat” condition is closer to the expected value.
To choose a low-pass cutoff frequency, I computed the FFT of the accelerometer-based pitch signal. The spectrum was broadly noisy rather than dominated by a single sharp tone, meaning much of the jitter is higher-frequency noise rather than slow attitude motion. Since the attitude changes relatively slowly compared to vibration, a low cutoff is reasonable, which is why I used 5 Hz.
Based on the spectrum and the need to preserve only slow orientation changes, I used a cutoff of fc ≈ 5 Hz for the low-pass filter.
I implemented a simple first-order IIR low-pass filter:
// dt in seconds, fc in Hz
alpha = exp(-2*pi*fc*dt)
y[k] = alpha*y[k-1] + (1-alpha)*x[k]
With fc=5 Hz, the filtered pitch angle is noticeably smoother than the raw pitch, while still tracking the slow trend. If fc is too low, the output lags real motion; if it is too high, too much vibration remains.
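The filter recurrence above translates directly into a one-line update; the function name is mine. Recomputing alpha from dt each step handles the slightly irregular sample timing of the IMU loop.

```cpp
#include <cmath>

// One step of the first-order IIR low-pass: y[k] = a*y[k-1] + (1-a)*x[k],
// with a = exp(-2*pi*fc*dt). Larger fc or dt -> smaller a -> faster tracking.
float lpf_step(float y_prev, float x, float fc_hz, float dt_s) {
    float alpha = std::exp(-2.0f * 3.14159265f * fc_hz * dt_s);
    return alpha * y_prev + (1.0f - alpha) * x;
}
```

With fc = 5 Hz and dt = 5 ms, alpha ≈ 0.85, so a step input settles within a few tens of samples while fast jitter is heavily attenuated.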
I computed pitch/roll/yaw by integrating gyroscope rates over time:
// omega in deg/s
theta[k] = theta[k-1] + omega[k]*dt
The key issue is drift: even a small constant bias (e.g., 0.2 deg/s) accumulates into degrees of error after a few seconds. In my data, the integrated gyro angles show a clear slow drift despite the board being mostly stationary.
To combine the short-term accuracy of the gyro with the long-term stability of the accelerometer, I used a complementary filter:
// gyro_pred = (theta_prev + omega*dt)
// acc_term = theta_acc_LPF
theta = alpha*gyro_pred + (1-alpha)*acc_term
The results show the expected behavior: the gyro-integrated curve drifts, the accelerometer LPF curve is noisy (especially under impulse/vibration), and the complementary output stays stable without long-term drift while still reacting smoothly.
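One update step of the fusion described above can be sketched as follows (function name is mine; alpha close to 1 trusts the gyro short-term, while the small accelerometer term removes long-term drift):

```cpp
// One complementary-filter step: integrate the gyro rate, then blend with the
// low-passed accelerometer angle. All angles in degrees, omega in deg/s.
float comp_filter_step(float theta_prev, float omega_dps, float dt_s,
                       float theta_acc_lpf, float alpha) {
    float gyro_pred = theta_prev + omega_dps * dt_s;  // gyro integration
    return alpha * gyro_pred + (1.0f - alpha) * theta_acc_lpf;
}
```

With a constant gyro bias of 0.2 deg/s, pure integration drifts by 2° over 10 s, whereas the complementary output settles to a small bounded offset near alpha·bias·dt/(1-alpha), which is the behavior visible in my plots.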
I repeated the same collection/analysis while inducing vibration (placing the IMU on the table and gently striking the table). Compared to quiet conditions, the accelerometer-derived angles become much noisier because vibrations add high-frequency components. This is exactly why the LPF + complementary filter are useful: the LPF suppresses high-frequency accelerometer noise, and the complementary filter prevents gyro drift while rejecting quick accelerometer spikes.
Below are the vibration plots (same pipeline as quiet):
The IMU period target was 5000 µs (≈200 Hz), but the measured effective rates were lower in practice (on the order of ~125-133 Hz in my runs). The main reason is loop overhead: I2C transactions, floating-point math, and any remaining debug/communication overhead. To maximize sampling speed, I avoided delays and did not block waiting for IMU data; instead, the loop checks whether data is ready and stores it only when available. Although I tried to make the collection window as close to 5 seconds as possible, some delays were unavoidable, offsetting it by roughly 0.2-0.3 seconds.
In this lab, I integrated two VL53L1X Time-of-Flight (ToF) distance sensors with the Artemis (via the Qwiic I2C bus),
and built a data pipeline to (1) run both sensors at the same time, (2) avoid blocking the main loop by checking
checkForDataReady(), and (3) record time-stamped distance and time-stamped IMU data for a fixed
duration and then transmit the results over Bluetooth (BLE) to my laptop for plotting in Python.
The VL53L1X appears on the I2C bus at a default address that I verified by scanning the bus on the Artemis.
Because both sensors share the same default I2C address, I needed a method to make them addressable simultaneously. There are two common approaches: (1) use the XSHUT pins at startup to hold one sensor in reset while programming the other to a new I2C address, or (2) toggle the XSHUT pins continuously so only one sensor is active at a time.
For my robot, I chose the first XSHUT-based approach, since I can easily drive the pin low or high at startup, which then lets me read both ToF sensors simultaneously. This is important for fast sensing.
I mounted/plan to mount one ToF sensor facing forward to detect obstacles in the direction of motion, and a second ToF sensor angled to the side to detect walls/side obstacles for future wall-following and obstacle avoidance.
Scenarios that can be missed: very thin obstacles (small cross-section), obstacles outside the sensor’s field-of-view, obstacles at extreme angles (weak return), and low obstacles that are below the optical path depending on mounting height.
I connected the Qwiic breakout to the Artemis to access the I2C lines (SDA/SCL), power, and ground. Then I connected the first ToF sensor via Qwiic cable and verified basic operation before attaching the second sensor.
I used the Apollo3 Wire example (Example05_wire_I2C) as a reference for scanning the I2C bus. The scan prints all
responding addresses and confirms that the sensor is detected and that the bus wiring is correct. From the datasheet, I expected it to be 0x52. The reason why it is actually an address of 0x29 is because in 8 digit binary, 0x52 is actually 01010010 and 0x29 actually represents the first 7 static digits. Thus, it does actually make sense.
The VL53L1X supports multiple ranging modes, typically Short and Long in the SparkFun library.
I tested my chosen mode using the vendor example (Example1_ReadDistance) to measure:
range, accuracy, repeatability, and ranging time.
I chose Short mode since it was far more accurate for detecting collisions and key distances. To test range and repeatability, I attached the ToF sensor to a fixed mount facing a wall, keeping it vertical so it read correctly, and collected 70 readings at a fixed distance in both Short and Long modes. The standard deviation for Short mode was much lower than for Long mode. I measured with a ruler that the sensor was roughly 1.5-1.6 cm from the wall, which supports the reasoning behind choosing Short mode. I tested at this close range because I expect the ToF sensors to be needed for collision avoidance more than anything else. Even if I need to switch to Long mode later, that is fine, since I wrote a separate helper function that makes switching modes easy.
After confirming each ToF sensor worked independently, I connected both sensors and ran them in parallel with the IMU. The main goal was to prove that the Artemis could service both I2C sensors (ToF + IMU) in one loop without locking up. I used a FUSED_START_RECORD case for this: after initializing both the IMU and the ToF sensors, the FUSED case records data from the IMU and both ToF sensors simultaneously. Below is a video showing how it works, along with my setup and how everything is connected and tested.
In future labs, the robot will need fast control loops, so the code cannot block while waiting for a sensor.
To enforce this, my loop continuously prints the Artemis clock (or increments a counter) as fast as possible,
and only prints new ToF readings when distanceSensor.checkForDataReady() indicates fresh data is available.
How fast does the loop execute?
I benchmarked the loop rate by counting iterations over a fixed time window and computing:
loop_hz = iterations / elapsed_time.
Current limiting factor: in practice, the limiting factor is usually the time spent in I2C transactions (sensor reads), plus any serial/BLE printing overhead; if the loop prints too much to Serial, serial output itself becomes the bottleneck. My measured loop frequency was around 800 Hz without the ToF active. With both ToF sensors initialized, it dropped to around 160 Hz. The loop itself is therefore much faster than the rate at which sensor data becomes ready, making the ToF sensors the clear limiting factor.
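The non-blocking pattern can be modeled off-hardware with a toy loop in which the "sensor" only has data every Nth iteration and the loop never waits for it; the structure and names below are illustrative stand-ins for the real checkForDataReady() polling.

```cpp
// Toy model of the non-blocking loop: count iterations and readings
// separately to show the loop spins far faster than fresh samples arrive.
struct LoopStats {
    long loops;     // total loop iterations (analogous to loop_hz * time)
    long readings;  // iterations where "data ready" fired
};

LoopStats run_polling_loop(long iterations, long sensor_period) {
    LoopStats s = {0, 0};
    for (long i = 0; i < iterations; i++) {
        s.loops++;                      // work that runs every iteration
        if (i % sensor_period == 0) {   // stand-in for checkForDataReady()
            s.readings++;               // consume the fresh sample only now
        }
    }
    return s;
}
```

With a sensor ready once every 5 iterations, 800 loop iterations yield 160 readings, mirroring the ~800 Hz loop versus ~160 Hz ToF rates I measured.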
I extended my Lab 1 BLE framework so the Artemis can record sensor data for a set duration (e.g., 5 seconds), store it in arrays, and then transmit it over BLE as chunked strings. This is a buffered logging approach: sampling happens locally on the Artemis first, then the data is “dumped” afterward.
t_us, tof_front_mm, tof_side_mm
t_us, ax_mg, ay_mg, az_mg, gx_dps, gy_dps, gz_dps
I plotted both ToF distance measurements against time after transferring the recorded samples over BLE. This plot is used later for obstacle detection and wall-following behaviors. I moved the robot back and forth during the recording so the plot shows both sensors responding as intended.
Using the IMU pipeline from Lab 2, I plotted IMU-derived motion (raw accel/gyro and/or pitch/roll) against time. This provides an orientation/tilt reference that can be fused later with ToF-based navigation logic.
The main engineering constraint in this lab was timing: ToF ranging takes a non-trivial amount of time, and blocking
while waiting for a measurement would stall any control logic. The non-blocking structure using checkForDataReady() allows the
loop to keep running quickly and handle other tasks (BLE, control updates, other sensors).
The BLE logging strategy also mirrors real robotics workflows: sample data quickly on-board, then transmit it in chunks. This avoids BLE becoming the bottleneck during time-critical sampling periods.
To test sensitivity, I fixed the sensor at a known distance (e.g., 200 mm, 400 mm, 800 mm) and collected repeated readings for targets with white matte, black matte, glossy, and textured surfaces. For each surface, I computed the mean error and standard deviation, then plotted error bars versus distance. The side ToF measured the white target and the front ToF measured the black one, with the white target being glossier and the black target more textured; I kept the distances the same. As can be seen, there is at best a 10 mm difference between the two, with the white glossy target appearing slightly farther away. A more controlled experiment would separate texture from color, but I did not have enough material to test texture first and then color. If I were to redo this with more time, I would use white and black duct tape (glossy) and white and black towels (textured) to isolate each variable, ensuring more consistent and accurate measurements.
In this lab, I integrated dual motor drivers with the Artemis and the RC chassis to achieve fast, untethered open-loop control.
I used PWM via analogWrite() to control motor power and used the motor driver input logic to control direction.
I first verified PWM behavior on an oscilloscope using a current-limited bench supply, then transitioned to battery power and tested
forward driving, on-axis turns, and straight-line calibration.
I selected PWM-capable Artemis pins based on (1) physical routing inside the chassis (short, clean wire runs) and (2) minimizing crossings near the motor leads to reduce EMI. I used four control pins total (two direction inputs per motor).
Wiring Diagram:
The Artemis and the motors were powered from separate batteries to prevent motor transients (stall current spikes, back-EMF, and supply droop) from brownout-resetting the microcontroller or injecting noise into the logic supply. Even with separate supplies, I tied grounds together so the PWM/logic signals have a common reference.
To deliver higher current without overheating, I parallel-coupled the two channels on each dual driver so both outputs share the load for one motor. Because both channels are on the same IC (shared timing/clocking), this is a common technique to increase usable current capacity.
I initially powered motor VIN from a bench supply to reduce the risk of damage during debugging. I forgot to take a picture of this setup, so I will describe it. I connected the channel-1 oscilloscope probe to one of the motor driver outputs, with the probe ground tied to the board ground (which was connected to the motor driver grounds). The first motor driver I tested had its inputs on pins 14 and 13. I drove a PWM value of 200 on one pin, waited, and then drove a PWM value of 200 on the other pin. The motor driver, the Artemis board, the oscilloscope, and the DC power supply all shared a common ground. I used the rightmost (0-5 V DC) channel of the supply, connecting the red lead to the motor driver's VIN and the black lead to GND (shared by the motor driver and the Artemis Nano). Reasonable initial settings were:
I used analogWrite() to generate PWM and verified the waveform frequency and duty cycle on the oscilloscope.
I tested multiple duty values (e.g., 25%, 50%, 75%) and confirmed the duty cycle changed as expected.
analogWrite(AB1IN_RIGHT, 255);
analogWrite(AB2IN_LEFT, 0);
analogWrite(AB2IN_RIGHT, 0);
analogWrite(AB1IN_LEFT, 0);
Oscilloscope / setup screenshot:
With the car on its side (wheels elevated), I confirmed each motor could spin forward and reverse by toggling the input direction pins. This was done first on the bench supply, and then repeated on battery power.
After validating PWM/direction, I powered motor VIN from the 850 mAh battery and confirmed behavior remained consistent. I then installed all components inside the chassis, secured wiring, and tested driving on the ground with a safety timeout.
Integrated chassis photo: shows both motor drivers, Artemis, batteries, and wire routing.
I measured the PWM frequency produced by analogWrite() on the oscilloscope.
Measured frequency:
around 190 Hz
For DC motors, ~190 Hz is generally adequate for speed control because the motor inductance and mechanical inertia average out the PWM. A higher PWM frequency can reduce audible noise and improve smooth low-speed operation, but may increase switching losses/heating in the driver. I could see that the motors responded directly to the PWM signals they received. Knowing this, I could manually configure the timers to generate a faster PWM signal, which would allow finer control of the motors.
I experimentally measured the minimum PWM needed to (a) start motion from rest (overcome static friction) and (b) maintain motion once already moving. I used a “kick then sustain” method: apply a brief higher PWM to start, then drop to the lowest PWM that maintains motion.
I found the above values to be the minimum PWM needed. I also found that it takes around 2 seconds for static friction to "settle," after which the robot reaches steady-state motion. The on-axis turn experiments could have been run more carefully, since I may have performed them when the battery was nearly depleted.
If the motors did not spin at the same effective speed, I applied a calibration factor to one motor’s PWM so the robot drove straighter. I validated this by driving along a straight reference line (tape) for at least 2 m, starting centered and still overlapping at the end.
I found a calibration factor of 0.7: the left wheels/motor ran at a PWM of 200, while the right wheels/motor ran at 0.7 × 200 = 140.
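This calibration can be captured in a small helper (names are mine, not the lab code) so every drive command automatically applies the factor to one side:

```cpp
// Straight-line calibration: scale one motor's PWM by a constant factor so
// both sides produce the same effective speed.
struct MotorPWM {
    int left;
    int right;
};

MotorPWM calibrated_pwm(int base_pwm, float right_factor) {
    MotorPWM m;
    m.left  = base_pwm;
    m.right = (int)(base_pwm * right_factor + 0.5f);  // round to nearest
    return m;
}
```

With base_pwm = 200 and right_factor = 0.7, this reproduces the 200/140 split used for the 2 m straight-line test.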
Finally, I demonstrated open-loop untethered control by driving forward and adding turns, with a safety timeout to prevent runaway. I made it drive forward, then turn left, drive forward, then turn right. Below is an example video, as well as the code.
In this lab, I implemented closed-loop position control so the robot could drive toward a wall and stop at a target distance of 1 ft (304 mm) using the front ToF sensor. I used a PID controller with Bluetooth-triggered execution, on-board logging, motor deadband compensation, a crash guard, and a settling routine that lets the robot move forward and backward near the setpoint instead of only coasting. I also implemented linear extrapolation so the controller could run much faster than the ToF update rate.
I used the BLE framework from earlier labs so the Artemis starts a PID run only when commanded from Python. During each run, the robot stores time stamps, raw ToF, extrapolated ToF, controller terms, and signed PWM in arrays. After the run completes, the data is sent back over BLE in chunks and plotted in Jupyter. This made gain tuning much faster because I could change gains over BLE without reflashing.
// representative control structure (PI with deadband compensation)
if (new_tof) ctrl_mm = raw_mm;                   // fresh ToF sample
else         ctrl_mm = tof_extrapolate(now_us);  // estimate between samples
err     = ctrl_mm - pid_setpoint_mm;
i_term += err * dt;                              // integral of error over time
u       = Kp*err + Ki*i_term;
pwm     = apply_deadband_and_clamp(u);
I chose a PI controller because proportional control alone was fast but left a steady-state error, while derivative control was not necessary for this task and would have amplified ToF noise. A representative gain set for the plotted trials was Kp = 0.12, Ki = 0.05, Kd = 0. I also added motor deadband compensation so small controller outputs were mapped to a PWM large enough to overcome static friction.
To improve robustness, I added three practical protections. First, a crash guard stops the robot if the raw ToF reading falls below a safety threshold. Second, a settling mode switches to gentler bidirectional corrections near the goal so the robot can recover from overshoot. Third, active braking is used only for the final stop, while normal zero command uses coasting.
From repeated runs starting roughly 1.5–4 m from the wall, the robot consistently reached the neighborhood of 304 mm without colliding. The raw and estimated ToF curves track closely, and the signed PWM plot shows the controller reversing direction after overshoot, which confirms that the system is actually regulating around the setpoint rather than simply stopping once it gets close.
A major goal of this lab was to decouple controller rate from sensor rate. In my data, the PID loop ran substantially faster than the ToF sensor. Representative measured rates were about 133–167 Hz for the PID loop and 18.6–19.4 Hz for the ToF sensor, so the controller ran about 6.9× to 9.0× faster than the sensor. This was done by computing control every loop and only updating the measured distance when a fresh ToF sample arrived.
Instead of reusing the last ToF value directly, I extrapolated the current distance from the last two readings:
slope = (d2 - d1) / (t2 - t1)
d_est = d2 + slope * (now - t2)
The resulting plots show that the extrapolated estimate closely follows the stepwise raw measurement while remaining smooth between ToF updates. This satisfies the lab requirement that the PID loop run faster than the sensor while still using a current distance estimate.
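For reference, the extrapolation step amounts to a few lines. This is an illustrative Python sketch of the same arithmetic, not the Arduino implementation itself:

```python
def tof_extrapolate(t1, d1, t2, d2, now):
    """Linearly extrapolate distance (mm) from the last two ToF samples.

    t1, t2: timestamps of the two most recent readings (seconds, t2 > t1)
    d1, d2: corresponding distances (mm); now: current time (seconds)
    """
    slope = (d2 - d1) / (t2 - t1)   # mm/s between the last two samples
    return d2 + slope * (now - t2)  # project forward to the current time

# e.g. closing on the wall at 400 mm/s:
# tof_extrapolate(0.0, 1000.0, 0.05, 980.0, 0.075) ≈ 970.0
```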
I have also included a different trial, where the robot was placed much closer to the wall than before, to show that the PID controller works across a range of starting distances, not just one.
Because the robot can begin far from the wall, the controller output may saturate for a long time. Without protection, the integral term can continue accumulating during saturation and then cause a large overshoot once the robot gets near the setpoint. To address this, I added an integral clamp and conditional anti-windup logic. I also added a BLE command so wind-up protection can be turned on and off for comparison.
This protection is necessary because the drivetrain is nonlinear: deadband, floor friction, and battery level all change the relationship between PWM and motion. A pure integrator that keeps growing while the actuator is already saturated stores unnecessary error and makes recovery slower and less repeatable. As the plots show, with wind-up protection disabled the overshoot is much more severe and the robot takes longer to settle. I also added two videos comparing wind-up protection on and off.
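The clamp-plus-conditional-integration idea can be sketched as follows. This is an illustrative Python version with placeholder thresholds, not my exact firmware:

```python
def update_integral(i_term, err, dt, u_unclamped, u_max, i_max):
    """Conditional anti-windup: only integrate when it can help.

    Skip accumulation when the controller output is saturated AND the
    error would push the integral further in the saturated direction.
    u_max and i_max are placeholder limits, not my tuned values.
    """
    saturated = abs(u_unclamped) >= u_max
    if saturated and (err * u_unclamped > 0):
        return i_term                        # freeze the integrator
    i_term += err * dt
    # hard clamp as a second line of defense
    return max(-i_max, min(i_max, i_term))
```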
Overall, the most important practical lessons were that loop timing matters, motor deadband must be handled explicitly, and a reliable controller benefits from simple “hacks” such as crash guards and settling logic. I did not include a derivative term in the final demonstrated controller because the ToF signal is relatively low-rate and noisy compared to the controller loop, so derivative action was more likely to amplify measurement noise than improve performance. The extrapolator, however, provided a clear benefit because it let the control loop run much faster than the sensor without waiting for new data each cycle.
The maximum linear speed during the wall approach was approximately 0.45 m/s (about 450 mm/s), estimated from the steepest slope of the logged ToF distance-versus-time data. I'm sure it could go a bit faster than this, but that would cause it to overshoot and hit the crash guard at times. This is the most reliable speed I have seen thus far.
In this lab, I implemented closed-loop yaw orientation control for the robot using the IMU. Unlike Lab 5, where the control objective was linear distance to a wall, this lab required the robot to rotate in place to a commanded heading. I used the gyroscope z-axis measurement from the ICM-20948, estimated yaw by integrating angular rate over time, and applied a feedback controller that commands the left and right wheels with equal magnitude and opposite direction so the robot turns in place.
My final controller used a PI structure with wind-up protection enabled. For the run shown here, I used gains of Kp = 3.3, Ki = 0.08, and Kd = 0; the setpoint is configurable over BLE. For testing, I chose 90 and -90 degrees (among other angles) because those are easy to verify by eye. Videos of all my runs are shown below. The controller successfully rotated the robot to the target angle with a small overshoot and then settled near the setpoint.
As in the previous labs, I used a BLE-based workflow so that I could start controller runs, change gains, calibrate the IMU bias, and dump logged data without reflashing the Artemis each time. This was especially important for this lab because orientation controllers need repeated tuning, and it is much faster to adjust parameters from Jupyter than to recompile and upload after every test.
On the Arduino side, I added a dedicated group of yaw-control commands: YAW_CALIBRATE_BIAS, YAW_SET_GAINS, YAW_SET_SETPOINT, YAW_START, YAW_STOP, YAW_GET_HEADER, and YAW_DUMP_NEXT. On the Python side, the notebook sends these commands over BLE, waits for the response, and then parses the returned data into arrays for plotting.
print(send_and_read(CMD.YAW_CALIBRATE_BIAS, "300", timeout=4.0))
print(send_and_read(CMD.YAW_SET_GAINS, "3.3|0.08|0.0", timeout=1.0))
print(send_and_read(CMD.YAW_SET_SETPOINT, "90.0", timeout=1.0))
print(send_and_read(CMD.YAW_START, "", timeout=1.0))
I also made the controller non-blocking with respect to Bluetooth. Inside the main loop, the Artemis continuously services BLE, the IMU, and the yaw controller. Because of this structure, the robot can keep running the controller while still processing Bluetooth commands and later dumping the recorded data. This is important not only for tuning but also for future labs where setpoints may need to be changed while the robot is moving.
The yaw estimate was obtained by integrating the gyroscope z-axis angular velocity:
yaw_est_deg = yaw_est_deg + (gz_dps - yaw_bias_dps) * dt
Before each run, I kept the robot still and estimated the gyroscope bias by averaging 300 samples. This was necessary because even a small constant bias would accumulate into heading drift over time when integrated digitally. Since yaw is obtained by integrating gyro rate, any offset in the sensor directly turns into growing angle error. Bias calibration significantly reduced this drift for the time scale of the controller runs.
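The bias estimate and the integration step amount to just a few lines each; here is an illustrative Python version of the same arithmetic:

```python
def calibrate_bias(gz_samples):
    """Estimate gyro bias (deg/s) by averaging stationary z-axis readings."""
    return sum(gz_samples) / len(gz_samples)

def integrate_yaw(yaw_deg, gz_dps, bias_dps, dt):
    """One yaw update: subtract the bias, then accumulate rate * dt."""
    return yaw_deg + (gz_dps - bias_dps) * dt

# A constant 0.5 deg/s bias left uncorrected would drift 30 degrees over
# a 60 s run; subtracting the calibrated bias removes that drift.
```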
One thing to note is that taking the derivative of the yaw error would effectively differentiate a signal that already depends on an integrated gyro measurement. In practice, I found that derivative action was not needed here. The IMU already gives angular velocity, and the proportional plus integral terms were enough to achieve the desired turn. Leaving Kd = 0 also avoided derivative kick and reduced sensitivity to noise.
I implemented a PI yaw controller with optional wind-up protection. The controller error is the wrapped difference between the desired heading and the estimated heading:
err = wrap_deg(yaw_setpoint_deg - yaw_est_deg)
u = yaw_kp * err + yaw_ki * yaw_i_term + yaw_kd * derr
The wrap function keeps the error in the range of -180 to 180 degrees, which ensures the robot always takes the shortest turning direction. I also used a small stop band near the target. If the error magnitude is below 2 degrees, the PWM command is set to zero so the robot does not chatter around the setpoint.
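A minimal Python sketch of the wrap and stop-band logic (the 2-degree band is the value quoted above; the helper names are illustrative):

```python
def wrap_deg(angle):
    """Wrap an angle to (-180, 180] so the error picks the shortest turn."""
    wrapped = (angle + 180.0) % 360.0 - 180.0
    return 180.0 if wrapped == -180.0 else wrapped

def in_stop_band(err_deg, band_deg=2.0):
    """Within the stop band the PWM command is zeroed to avoid chatter."""
    return abs(err_deg) < band_deg
```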
Because the motors have static friction and a deadband, small controller outputs are not strong enough to make the robot turn. To handle this, I enforced a minimum turn PWM whenever the robot is outside the stop band. In my code, positive turns and negative turns use slightly different minimum PWM values, which compensates for asymmetry in the drivetrain.
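The direction-dependent deadband mapping looks roughly like this (the threshold values here are placeholders, not my tuned numbers):

```python
MIN_PWM_CCW = 40  # placeholder minimum for positive (CCW) turns
MIN_PWM_CW = 45   # placeholder minimum for negative (CW) turns

def enforce_turn_deadband(u):
    """Raise small nonzero commands to the minimum PWM that actually
    rotates the robot, with separate thresholds per direction."""
    if u > 0:
        return max(u, MIN_PWM_CCW)
    if u < 0:
        return min(u, -MIN_PWM_CW)
    return 0
```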
I enabled integrator wind-up protection by clamping the accumulated integral term. This is useful because when the robot begins far from the setpoint, the controller output can remain saturated for some time. Without clamping, the integral term would continue growing even though the motors are already at their effective limit, which would produce more overshoot once the robot nears the target.
For the trial shown below, the robot started near 0 degrees and was commanded to rotate to approximately 90 degrees in place. The heading is plotted as a positive magnitude, so the response converges to about 90 degrees. The measured angle rose quickly, overshot slightly to about 94 degrees, and then settled near the commanded setpoint.
The top plot shows the yaw response. The bottom plot shows the signed turn PWM. At the beginning of the maneuver, the controller commands a large positive PWM to rapidly rotate the robot. As the robot approaches the target, the commanded PWM drops. A short negative pulse appears after the overshoot, which corresponds to the controller correcting the heading back toward the setpoint. After that, the PWM remains near zero, indicating that the robot has settled.
The gyroscope z-axis plot confirms the turning behavior. The initial large positive angular velocity corresponds to the robot rotating toward the target, and the negative angular velocity spike corresponds to the correction after overshoot. The yaw error plot drops rapidly from about 90 degrees toward zero, briefly crosses past zero due to overshoot, and then settles close to zero.
The measured mean controller time step was about 0.00456 s, corresponding to a control rate of about 219.3 Hz. This is fast enough for the orientation control task and is substantially faster than the human-observable motion of the robot. Running the controller at this rate allowed the robot to respond quickly while still leaving time in the loop for BLE servicing.
The main practical issues in this lab were gyro drift, actuator deadband, and controller saturation. Gyro integration alone is never perfect because bias causes drift over time, but for the time scale of these turning maneuvers, bias calibration made the integrated yaw estimate accurate enough to control orientation. The deadband problem was significant because the robot would not turn at all for small PWM values, so adding minimum turning PWM thresholds was necessary. Saturation also made wind-up protection important, since the controller often begins with a large heading error.
I chose not to use derivative control in the final implementation. Since the gyroscope already measures angular rate directly, the proportional and integral terms were sufficient to produce a fast response. The final controller achieved the lab goal of stationary orientation control, and the robot was able to rotate to the commanded heading with only modest overshoot and good settling behavior.
In future labs, this controller can be extended by updating the setpoint in real time during motion, or by combining a forward drive term with the turn command so the robot can regulate heading while translating. The current BLE-driven structure already supports that kind of extension because the control loop is non-blocking and can continue processing commands while running.
Multiple tests: the robot was commanded to 90 or -90 degrees and then kicked mid-run to demonstrate that the PID controller rejects disturbances and returns to the setpoint.
In this lab, I implemented a Kalman Filter to supplement the slowly sampled ToF distance measurements used in my Lab 5 PID position controller. The Kalman Filter fuses a dynamics model of the robot with noisy sensor readings to produce a faster, smoother estimate of the robot's distance from the wall. This allows the control loop to run at a higher rate than the ToF sensor alone, which is critical for driving toward the wall at higher speeds without crashing.
The lab involved four main steps: (1) performing a step response to estimate the drag and momentum parameters of the robot, (2) building the state space model and initializing the Kalman Filter in Python, (3) sanity-checking the filter on recorded data in Jupyter, and (4) implementing the Kalman Filter on the Artemis and integrating it with my existing PID controller.
To build the state space model, I needed to estimate the drag coefficient d and the momentum term m. I did this by driving the robot toward a wall at a constant PWM of 150 (approximately 59% of the maximum) while logging the front ToF sensor output and the motor input. The step response ran for 4 seconds, which was long enough for the robot to accelerate and approach the wall.
I added a dedicated STEP_START BLE command to the Artemis that drives the motors at a constant PWM while recording time-stamped ToF readings. After the step completes, the data is dumped over BLE and saved to a CSV file so it persists across kernel restarts.
The three required plots are shown below: the constant motor input (PWM = 150), the ToF distance over time, and the computed velocity from finite differences of the ToF readings. The x-axis is in seconds as required.
Looking at the ToF plot, the robot started about 300 mm from the wall, moved away briefly (the front sensor saw increasing distance as the robot initially moved in the opposite direction relative to the sensor), peaked around 1200 mm near t ≈ 0.9 s, and then approached the wall rapidly, reaching near 0 mm by t ≈ 2.3 s. The velocity plot shows a large spike around t ≈ 0.9 s which is a noise artifact from the abrupt ToF jump, not a real speed. The usable descent portion (1200 mm to 0 mm over about 1.3 seconds) gives an approximate steady-state speed.
From the step response data, I estimated the model parameters using the lab instructions' formulas with a normalized unity input (u = 1):
d = u / v_ss = 1/900 ≈ 0.001111 and m = −d · t_0.9 / ln(1 − 0.9) ≈ 0.000386.
I obtained the following:
The time constant τ = m/d ≈ 0.347 s is reasonable for the robot's dynamics at this PWM level. Note that the step response was not perfectly clean because the robot hit the wall before fully reaching steady state, so these values are approximate. However, the Kalman Filter is robust to moderate model errors because the covariance tuning compensates for model inaccuracy.
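Writing out the arithmetic (t_0.9 is back-solved from the quoted m, roughly 0.8 s for my data):

```python
import math

u = 1.0       # normalized unity step input
v_ss = 900.0  # steady-state speed in mm/s, read off the ToF slope
t_90 = 0.8    # time to reach 90% of v_ss, in seconds

d = u / v_ss                       # drag:          ~0.001111
m = -d * t_90 / math.log(1 - 0.9)  # momentum:      ~0.000386
tau = m / d                        # time constant: ~0.347 s
```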
The robot's 1D dynamics are modeled as a second-order system with drag: the net force is F = u − d·ẋ, giving
m·ẍ = u − d·ẋ
State: x̄ = [x, ẋ]ᵀ (position and velocity)
A = [[0, 1],
     [0, −d/m]]
B = [[0],
     [1/m]]
C = [−1, 0]  (the ToF measurement corresponds to the negative of the position state)
I then discretized the matrices using the Euler method with a sampling time of dt ≈ 0.0287 s (approximately 34.9 Hz, the average ToF sample period from my data):
Ad = I + dt · A
Bd = dt · B
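In the notebook this discretization is a couple of NumPy lines (the numeric values are the ones quoted above):

```python
import numpy as np

d, m, dt = 0.001111, 0.000386, 0.0287  # drag, momentum, ToF sample period

A = np.array([[0.0, 1.0],
              [0.0, -d / m]])
B = np.array([[0.0],
              [1.0 / m]])

Ad = np.eye(2) + dt * A  # Euler forward discretization
Bd = dt * B
```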
The Kalman Filter requires specifying how much we trust the model versus the sensor. I set the noise covariances as follows:
The relative values of Σ_u and Σ_z determine the filter's behavior. Larger process noise means less trust in the model (the filter follows the sensor more closely), while larger measurement noise means less trust in the sensor (the filter relies more on the model prediction). I discuss the effect of tuning these in the experiments section below.
I implemented the Kalman Filter using the function provided in the lab instructions, with variable names following the lecture convention (Ad, Bd, C, Sigma_u, and Sigma_z are NumPy arrays defined globally):

def kf(mu, sigma, u, y):
    # Prediction step
    mu_p = Ad.dot(mu) + Bd.dot(u)
    sigma_p = Ad.dot(sigma.dot(Ad.T)) + Sigma_u
    # Update step
    sigma_m = C.dot(sigma_p.dot(C.T)) + Sigma_z
    kkf_gain = sigma_p.dot(C.T.dot(np.linalg.inv(sigma_m)))
    y_m = y - C.dot(mu_p)
    mu = mu_p + kkf_gain.dot(y_m)
    sigma = (np.eye(2) - kkf_gain.dot(C)).dot(sigma_p)
    return mu, sigma
I ran the Kalman Filter on my step response data (140 data points over about 4 seconds) to verify that it produced reasonable estimates before deploying it on the robot. The initial state was set to the first ToF reading with zero velocity, and the motor input was scaled by dividing by the step size (255) to normalize it.
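The replay loop can be reproduced end to end on a short synthetic log. This self-contained sketch mirrors the notebook flow; the real run used the 140-point CSV, and the covariances here are placeholders:

```python
import numpy as np

d, m, dt = 0.001111, 0.000386, 0.0287
Ad = np.eye(2) + dt * np.array([[0.0, 1.0], [0.0, -d / m]])
Bd = dt * np.array([[0.0], [1.0 / m]])
C = np.array([[-1.0, 0.0]])
Sigma_u = np.diag([100.0, 100.0])  # placeholder process noise
Sigma_z = np.array([[400.0]])      # placeholder measurement noise

def kf(mu, sigma, u, y):
    mu_p = Ad @ mu + Bd @ u                # predict
    sigma_p = Ad @ sigma @ Ad.T + Sigma_u
    K = sigma_p @ C.T @ np.linalg.inv(C @ sigma_p @ C.T + Sigma_z)
    mu = mu_p + K @ (y - C @ mu_p)         # update with the innovation
    sigma = (np.eye(2) - K @ C) @ sigma_p
    return mu, sigma

tof_mm = [1000.0, 990.0, 975.0, 955.0]  # stand-in for the logged ToF data
mu = np.array([[-tof_mm[0]], [0.0]])    # init: first reading, zero velocity
sigma = np.diag([25.0, 25.0])
u_norm = np.array([[150.0 / 255.0]])    # motor input normalized by 255
for z in tof_mm[1:]:
    mu, sigma = kf(mu, sigma, u_norm, np.array([[z]]))
dist_est_mm = -mu[0, 0]                 # state stores negative distance
```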
The top plot below shows the KF position estimate (red) overlaid on the raw ToF readings (blue dots). The KF tracks the raw data closely while providing a smoother trajectory. The bottom plot shows the KF velocity estimate (green), which remains smooth and physically reasonable even though the finite-difference velocity (blue dots) has extreme spikes due to the noisy ToF jumps. This demonstrates the key advantage of the Kalman Filter: it produces a smooth, reliable state estimate even from noisy, slowly-sampled sensor data.
To understand how the covariance matrices affect the filter, I ran three experiments:
These experiments confirm the expected behavior from the lecture: the relative balance between Σ_u and Σ_z determines whether the filter favors the model or the sensor. For my application, the baseline parameters worked well because the model is only a rough approximation (the step response did not cleanly reach steady state), so I wanted the filter to still rely significantly on the ToF readings while smoothing the noise between measurements.
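The qualitative trend is easy to verify on a toy scalar model (an illustration of the principle, not my robot's matrices): with A = C = 1, the steady-state Kalman gain grows as the process noise grows, so the filter leans more on the sensor.

```python
def steady_state_gain(q, r, n_iter=200):
    """Iterate the scalar Riccati recursion for A = C = 1.

    q: process noise variance, r: measurement noise variance.
    Returns the converged Kalman gain, which lies in (0, 1).
    """
    p, k = 1.0, 0.0
    for _ in range(n_iter):
        p_pred = p + q             # predict
        k = p_pred / (p_pred + r)  # gain
        p = (1.0 - k) * p_pred     # update
    return k

k_low = steady_state_gain(q=0.1, r=1.0)    # trust the model more
k_high = steady_state_gain(q=10.0, r=1.0)  # trust the sensor more
```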
I implemented the Kalman Filter on the Artemis using the BasicLinearAlgebra library for matrix operations.
The implementation includes three main functions:
- kf_predict(u) — runs the prediction step every control loop iteration using the current motor command
- kf_update(z) — runs the update step only when a new ToF measurement arrives
- kf_reset(initial_tof) — initializes the state from the first ToF reading at the start of each PID run

// Prediction step (runs every loop)
void kf_predict(float u_normalized) {
Matrix<1> u_mat = {u_normalized};
kf_state = Ad_kf * kf_state + Bd_kf * u_mat;
kf_Sigma = Ad_kf * kf_Sigma * ~Ad_kf + kf_Sigma_u;
}
// Update step (runs only when new ToF arrives) — sketch of the standard
// KF update with BasicLinearAlgebra, assuming kf_predict() has already run
void kf_update(float z_measurement) {
  Matrix<1> z = {z_measurement};
  Matrix<1> y_m = z - C_kf * kf_state;                   // innovation
  Matrix<1> s = C_kf * kf_Sigma * ~C_kf + kf_Sigma_z;    // innovation covariance
  Matrix<2, 1> K = kf_Sigma * ~C_kf * (1.0f / s(0, 0));  // Kalman gain
  kf_state = kf_state + K * y_m(0, 0);
  kf_Sigma = kf_Sigma - K * C_kf * kf_Sigma;
}
I integrated the KF into my existing pid_service() function with a toggle flag kf_enabled. When the KF is enabled, the controller uses the KF position estimate instead of the linear extrapolation from Lab 5. The KF parameters (d, m, dt, and noise covariances) are sent from Python over BLE using a KF_SET_PARAMS command, which allows tuning without reflashing.
I sent the computed d, m, dt, and noise standard deviations from Python to the Artemis:
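A call in the style of the earlier BLE commands might look like the following; the "|"-delimited ordering and the noise values below are my assumptions for illustration, not the exact payload format:

```python
d, m, dt = 0.001111, 0.000386, 0.0287
sigma_u, sigma_z = 10.0, 20.0  # hypothetical noise standard deviations

# Pack parameters the same way YAW_SET_GAINS packs "Kp|Ki|Kd"
payload = "|".join(f"{v:.6f}" for v in (d, m, dt, sigma_u, sigma_z))
# send_and_read(CMD.KF_SET_PARAMS, payload, timeout=1.0)  # as in earlier labs
```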
The plot below shows the KF-PID controller running on the actual robot. The red line is the KF estimate computed on the Artemis, and the blue dots are the raw ToF readings. The green dashed line is the 304 mm (1 ft) setpoint. The KF estimate tracks the raw ToF data closely while providing the smooth, continuous estimate that the PID controller uses for its error computation.
The robot started near the setpoint, moved away (the bump to ~1200 mm around t ≈ 2 s is the same sensor behavior seen in the step response), and then returned toward the wall, eventually settling near the setpoint by about t ≈ 6 s. The PWM plot shows the controller commanding forward motion when the robot is far from the wall and then switching to the settling routine near the setpoint. The brief negative PWM spike near t ≈ 7 s corresponds to the settling controller correcting a small overshoot.
Video of the KF-PID run:
The final parameter summary for the Kalman Filter implementation:
Key values:
The Kalman Filter successfully replaced the linear extrapolation from Lab 5 with a model-based estimator. The main advantage is that the KF uses a physics-based prediction (incorporating drag and motor input) rather than a simple linear slope, which means the estimate degrades more gracefully between sensor updates, especially when the robot is accelerating or decelerating.
One challenge I encountered was the sign convention. The state space model uses C = [−1, 0], meaning the measurement is the negative of the position state. This caused an initial bug where the KF estimate diverged in the opposite direction from the raw data, both in the Python sanity check and on the robot. The fix was straightforward: pass the positive ToF reading directly as the measurement, since the C matrix already handles the sign inversion internally.
Another observation is that the step response data was not ideal for system identification. The robot hit the wall before reaching a clear steady-state speed, so the estimated d and m are approximate. Despite this, the Kalman Filter performed well because the covariance tuning compensates for model inaccuracy. When the model is wrong, the Kalman gain automatically increases its trust in the sensor measurements. This is one of the practical strengths of the KF framework: it does not require a perfect model to produce useful estimates.
The noise comparison experiments confirmed the theoretical prediction from the lectures: increasing process noise makes the filter track the sensor more closely, while increasing measurement noise makes it smoother but more reliant on the model. For my robot, the baseline parameters provided a good balance, keeping the estimate close to the raw ToF while filling in between the ~35 Hz sensor updates with model-based prediction.
For future work (Lab 8), this KF infrastructure enables running the PID controller at a much higher effective rate, since the KF can predict the robot's state between ToF measurements. This should allow driving toward the wall at higher speeds while still maintaining control authority to stop at the setpoint.