Power analysis with Rigol DS1000Z

Recently, I had to analyze the power profile of a microcontroller at a specific point in time. This article will cover the required steps to perform such measurements with Rigol DS1054Z/1104Z oscilloscope.

Python interface

You can connect to the oscilloscope via USB or LAN/LXI. Since it’s only USB 2.0, you might think that LAN would be much faster, but this is not the case - download speed over LAN is painfully slow.

In order to interact with the scope from Python, we first need to install the pyvisa-py and pyusb modules via pip. Then we can send SCPI commands, which are described in the DS1000Z Programming Guide.
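A minimal sketch of connecting and running a couple of simple queries - the resource strings below are examples, and over LAN you’d substitute your scope’s actual IP address:

import pyvisa

rm = pyvisa.ResourceManager('@py')  # use the pure-python pyvisa-py backend
print(rm.list_resources())          # USB instruments show up as e.g. USB0::0x1AB1::0x04CE::<serial>::INSTR

rig = rm.open_resource(rm.list_resources()[0])
# or, over LAN/LXI: rig = rm.open_resource('TCPIP0::192.168.1.100::INSTR')

print(rig.query('*IDN?'))       # identification string, e.g. RIGOL TECHNOLOGIES,DS1054Z,...
print(rig.query(':ACQ:SRAT?'))  # current sample rate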

This is where things get ugly. Simple queries worked fine, but as soon as I tried to read the waveform I encountered either a timeout or the following error:

usbtmc.py:115: UserWarning: Unexpected MsgID format. Consider updating the device's firmware.
See https://github.com/pyvisa/pyvisa-py/issues/20

...

File "/usr/lib/python3.11/site-packages/pyvisa/resources/messagebased.py", line 486, in read
message = self._read_raw().decode(enco)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128)

At the time of writing I was using Python 3.11 with pyvisa-py 0.6.2. The firmware was already up to date and increasing the timeout didn’t help at all. It seems like every time I try a Python wrapper around libusb, it simply doesn’t work.

After half an hour of debugging I found that setting chunk_size to 32 produces stable(-ish) results on both of my machines. If that doesn’t help, you might want to try setting RECV_CHUNK to 32 and max_padding to 0 in pyvisa_py/protocols/usbtmc.py.
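For reference, the chunk_size workaround is just an attribute on the opened pyvisa resource (editing RECV_CHUNK/max_padding, on the other hand, means patching the installed pyvisa-py sources directly):

rig.timeout = 1500   # ms - binary transfers can be slow
rig.chunk_size = 32  # read the USB bulk endpoint in small chunks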

Collecting waveforms

By default the :WAV:DATA? command returns the on-screen memory, which is limited to 1200 points. Since we need slightly more than that, we’ll have to perform a ‘deep-memory’ read. A chunk of about 250k points per read turned out to be a safe limit, so the memory has to be read out in chunks. After some trial and error I found that higher values like 500k work as well, although stability degrades. By ‘stability’ I mean the number of retries required to read each chunk successfully - even a 250k read might take 1-2 attempts to complete.

The following script assumes that we trigger on CH2 and capture the CH1 waveform. For each trace we arm the scope, wait for the acquisition to complete, perform a deep-memory read and dump the results into a CSV file. Things like memory depth and trigger settings should be configured on the scope manually.

import time
from math import ceil
from pyvisa import ResourceManager

rm = ResourceManager('@py')
for r in rm.list_resources():
    if 'USB' in r:
        print('Connecting over USB')
        rig = rm.open_resource(r)
        break
else:
    print('Scope not found')
    exit()

rig.timeout = 1500
rig.chunk_size = 32
max_points = 250_000  # lower value = more stable

print('device:', rig.query('*IDN?'))
rig.write(':WAV:MODE RAW')   # read sample memory instead of the screen buffer
rig.write(':WAV:FORM BYTE')  # one byte per point

mem = int(rig.query(':ACQ:MDEP?'))
if max_points > mem:
    max_points = mem
f = open(time.strftime('%b-%d-%Y_%H-%M-%S', time.localtime()) + '_trace.csv', 'w')

for trace in range(100):
    # single capture
    rig.write(':SING')
    time.sleep(0.3)  # STOP->WAIT transition takes a while
    print('waiting for trigger..')
    while 'STOP' not in rig.query(':TRIG:STAT?'):
        time.sleep(0.1)

    # deep-memory read, max_points points per chunk
    buf = []
    for i in range(ceil(mem / max_points)):
        start = i * max_points
        stop = min(start + max_points, mem)
        time.sleep(0.01)
        rig.write(f':WAV:STAR {start + 1}')  # 1-based, inclusive
        rig.write(f':WAV:STOP {stop}')

        for _ in range(10):
            try:
                tmp = rig.query_binary_values(':WAV:DATA? CH1', datatype='B')
                if len(tmp) != stop - start:
                    print(f'got {len(tmp)}/{stop - start} bytes - retrying')
                    continue
                buf += tmp
                break
            except Exception:
                print('retrying')

    # one trace per line, comma-separated raw sample values
    f.write(','.join(str(v) for v in buf) + '\n')

f.close()
rig.close()
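The resulting file holds one trace per line as comma-separated raw 8-bit sample values. For a quick sanity check you can load the whole thing with numpy - assuming all traces have the same length, and with the filename adjusted to whatever the script produced:

import numpy as np

traces = np.loadtxt('traces.csv', delimiter=',')
print(traces.shape)  # (number of traces, points per trace)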

Aligning traces

Since we’re triggering on a different channel and our event of interest occurs some time after the trigger, the traces will most likely be slightly misaligned due to various reasons like clock jitter, so we need a way to deal with that.

Let’s say we have two signals a and b that are offset in phase.

We can ‘align’ them using cross-correlation. Applying some filtering beforehand is also a good idea, but we’ll skip that step for now. Cross-correlation doesn’t play well with signals that have a DC bias and can produce false peaks, so we remove the bias first. Next we find the largest peak - this is the point where our signals align.

import numpy as np
from scipy import signal
from scipy.ndimage import shift

# remove DC offset
a -= np.mean(a)
b -= np.mean(b)

# cross-correlation
cor = signal.correlate(a, b)

# find maximum
px, py = signal.find_peaks(cor, height=0)
peak_x = px[np.argmax(py['peak_heights'])]

[Figure: Cross-correlation]

Cross-correlating two length-n signals produces a sequence of length 2n - 1, with zero lag in the middle of the x-axis. Our phase shift is the x-coordinate of the maximum relative to that zero point. Knowing the offset, all we have to do is shift one of the signals. Obviously we lose some data when performing the shift - you can either throw these points away from both traces or fill them with some fixed value instead (in this case, the median).

# align traces
offset = len(a) - peak_x - 1
b = shift(b, -offset, cval=np.median(b))

[Figure: Aligned traces]
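
To sanity-check the sign conventions, here’s a toy example with a known delay - np.roll wraps around instead of padding, but for a quick check that’s close enough:

import numpy as np
from scipy import signal

t = np.linspace(0, 10, 500)
a = np.sin(t)
b = np.roll(a, 3)  # delay a copy of the signal by 3 samples

cor = signal.correlate(a - a.mean(), b - b.mean())
offset = len(a) - np.argmax(cor) - 1
print(offset)  # -> 3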

Processing data

Now we can process the captured traces. We could of course load everything into memory, but in case we ever need gigabytes’ worth of traces (good luck capturing those with this scope), we’re going to read the capture file line by line, align each trace with a reference trace and accumulate a running average - that way memory usage stays constant no matter how many traces there are.

import matplotlib.pyplot as plt
import numpy as np
from scipy import signal
from scipy.ndimage import shift

with open('traces.csv', 'r') as f:
    # process the first trace and use it as reference
    a = np.array(f.readline().split(','), dtype=float)
    a -= np.mean(a)
    averaged_trace = a.copy()

    n = 0
    for l in f:
        b = np.array(l.split(','), dtype=float)
        b -= np.mean(b)

        # find the offset using cross-correlation
        cor = signal.correlate(a, b)
        px, py = signal.find_peaks(cor, height=0)
        peak_x = px[np.argmax(py['peak_heights'])]
        offset = len(a) - peak_x - 1  # a -> b offset

        # align the trace with the reference
        b = shift(b, -offset, cval=np.median(b))

        # running average across traces: avg_new = (b + n*avg) / (n + 1)
        averaged_trace = (b + n * averaged_trace) / (n + 1)
        n += 1

plt.grid(True)
plt.plot(a)
plt.plot(averaged_trace)
plt.show()

Conclusion

Even though the DS1054Z is definitely not the best tool for this job, it can still get the job done, especially with a bit of patience. Below is an example of one of the sets of traces that I captured. As usual, the code is on GitHub.