The photo-world is all agog over Sony’s just-announced (but not shipping till next year) high-end ($6K) camera, the ɑ9 III, because it has a “global shutter”. No, the “global” label didn’t mean anything to me either when I first read it. The write-ups about it have explainers and cool pictures (PetaPixel, DPReview). I found myself wondering “What is this thing’s bandwidth?” and thus this note. I’ll toss in another little explainer so you don’t have to click on a link like a savage.

Sony ɑ9 III

Non-global shutters · A digital camera sensor has millions of pixels arranged on a grid (in non-obvious and geometrically interesting ways, but let’s not go there); they are analog devices that measure how many photons hit them. To produce a digital image the camera runs a bunch of voodoo across the sensor to produce a digital integer array that can be saved in memory and eventually displayed as a colored image on a screen.

But wait, how does the camera go about arranging for the photons to hit the sensor? Well, there can be an actual physical shutter that opens for a very short time and then closes again, or there can be a purely electronic-mode “shutter” that turns on the pixels then reads the values off them after enough time has passed.

But a physical shutter takes nonzero time to traverse the face of the sensor, so the pixels at the top are not exposed at the same instant as the pixels at the bottom. (Of course it’s more complicated than that, there are shutter geometries and other design tricks, but let’s not go there.) Which is normally OK, but suppose you’re taking a picture of something that’s moving fast. Then you get what’s called “banding” or “rolling shutter” distortion, which usually shows up as unwanted curvature: straight lines on the moving subject come out bent or slanted. There are other problems with synchronizing a flash (but I don’t use those) and in video mode.

Electronic shutters don’t make this problem go away. The pixels are arranged in an array (6240×4160 on my Fujifilm X-T30, 4080×3072 on my Pixel 7) and are typically read off about as you’d expect, a row at a time. Which in practice behaves like a traveling shutter.
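To get a feel for how row-at-a-time readout turns into visible skew, here’s a back-of-the-envelope sketch. The per-row readout time and the subject speed are made-up illustrative numbers, not published specs for any real camera.

```ruby
# Rough sketch of rolling-shutter skew from row-by-row electronic readout.
# Both the per-row time and the subject speed are invented for illustration.
rows = 4160                    # rows on my X-T30's sensor
row_readout_s = 1.0 / 250_000  # assumed: 4 microseconds to read one row
total_readout_s = rows * row_readout_s
subject_speed = 40.0           # meters/second, e.g. a hard-hit baseball
skew_m = subject_speed * total_readout_s
puts "full-frame readout takes #{(total_readout_s * 1000).round(2)} ms"
puts "subject moves #{(skew_m * 100).round(1)} cm between top and bottom rows"
```

With those assumptions the whole frame takes about 17 ms to read, during which a fast subject moves most of a meter; that displacement, smeared continuously down the rows, is where the curvature comes from.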

“Global shutter” · You’ve likely already figured it out. These things advertise that they read all the pixels off the sensor at once. So, no matter how fast your subject is moving, you’ll get an image of what it really looked like. And those flash and video problems vanish. And because circuits are faster than shutters, you can shoot at an eighty thousandth of a second.

All of which probably doesn’t do much for me, I take pictures of oceans and flowers and trees mostly. But for people who shoot sports or wildlife or in extreme lighting situations, this is probably a big deal. And there’s no need for a physical shutter at all; any time you can get rid of a moving part, that’s a win.

“Simultaneously?” · One result of all this is that the ɑ9 III can take 120 shots/second. At this point I should mention that it has 24.6M pixels, small by modern high-end-camera standards. So, first of all I was wondering how you read those data points “simultaneously”. I’m not a microelectronics whiz but a few jobs ago I learned a lot about memory controllers and, well, that’s a lot of integers to move all at once. Then I wondered, what’s the bandwidth at 120 frames/second?

The first question that arises is: how many bytes is 24.6 million pixels? Starting with, how many bits per pixel? The answer is less obvious than you’d think. My first assumption was that since the pixels on my screen have 24 bits of RGB information it’d be three bytes/pixel, but no: each sensor pixel only measures the dynamic range of one color, and a process called demosaicing later produces the RGB pixels. So I thought maybe just 8 bits/pixel? As with everything else, it’s more complicated than that; the answer seems to be somewhere between 10 and 16 bits/pixel.
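Before worrying about frames per second, it’s worth seeing how big a single frame is at a few plausible bit depths. The depths below are just guesses spanning that 10-to-16 range, not Sony specs.

```ruby
# Size of one 24.6-megapixel raw frame at a few candidate bit depths.
# The bit depths are guesses from the 10-16 range, not anything Sony publishes.
pixels = 24.6 * 10**6
[10, 12, 14, 16].each do |bpp|
  megabytes = pixels * bpp / 8.0 / 10**6
  puts "#{bpp} bits/pixel -> #{megabytes.round(2)} MB/frame"
end
```

So each frame is somewhere around 30 to 50 MB of raw sensor data before any compression.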

So I scribbled some Ruby code, which takes a single argument, a guess at the number of bits per pixel, and computes how many GB/second those 120 shots add up to. Here’s the Ruby in case you want to check my arithmetic.

# Bytes/second if the camera reads the whole 24.6M-pixel sensor
# 120 times a second at the given bit depth.
def data_rate(bits_per_pixel)
  pixels = 24.6 * 10**6
  shots_per_second = 120
  bits_per_sensor = pixels * bits_per_pixel
  bytes_per_sensor = bits_per_sensor / 8.0
  bytes_per_sensor * shots_per_second
end

bpp = ARGV[0].to_f
bw = data_rate(bpp)
bw_in_g = bw / 10**9
puts "data rate #{bw_in_g}G"

If you trust that Ruby code, at 10 bits/pixel, the camera is moving 3.69GB/sec; 5.90GB/sec at 16. Which I think is a pretty neat trick for a consumer product, even a high-end one.

The future · It seems likely that global shutters will get a lot cheaper and become a feature of almost every serious camera. Because those circuit designers and semiconductor-wranglers are really smart people, and you just know they’re going to find a whole lot of ways to take this v1.0 implementation and make it cheaper and better.

What’s interesting is, it’s not obvious to me whether or not global shutters will be ubiquitous in mobile-phone cameras. They have way more CPU but way less room inside. We’ll see.

But, you know what, I’d sort of thought that we were in a plateau of excellence in camera design, wasn’t expecting any really significant new features to show up. But what with C2PA and now this in the last couple of weeks, it’s pretty clear I was wrong. Fun times!



From: Scott Laird (Nov 13 2023, at 09:47)

There's a limited amount of progress happening on the still photography side, but video still has a ways to go on SLR-like cameras. Go take a look at which resolutions/frame rates/bit depths/codecs common cameras support, and you'll see a bizarre mishmash where different codecs or resolutions limit your options in really non-obvious ways.

The A9iii, for instance, can do UHD 4k in 4:2:0 color at 119.88 fps (200 Mbps), but stepping up to 4:2:2 color drops the maximum frame rate to 59.94 fps (but the available bitrate jumps to 600 Mbps).

The Nikon Z8 is even weirder; it can shoot >8k raw at 60 fps, but UHD 8k compressed drops to 30 fps. In UHD 4k, ProRes is capped at 29.97 fps, but ProRes *Raw* can only do 50 or 59.94 fps. Meanwhile, H.265 can do up to 120 fps.

In both cases, presumably they couldn't get enough performance out of one stage of their image pipeline to support a full mesh of resolution/frame rate/bit depth. Weirdly, raw flash write rates aren't usually the issue any more, unlike a few years ago where SDXC was as good as things got.

My Panasonic GH6 is a couple years old, but practically every single video setting has implementation-defined limits at the edge. Want 4:2:2 with all i-frames? You can't use UHD 4k, but DCI 4k is fine. Want ProRes? Capped at 60 fps. Heck, some modes are only available when connected to AC power, as the battery apparently doesn't provide enough current consistently. Trying to figure out how to get the best quality out of it at 4k 120 fps (for slow motion) is *hard*.

Hopefully in a few more years this will all settle out, as sensor resolution mostly stalls but video processing and storage speeds slowly increase.


From: White_Rabbit (Nov 16 2023, at 03:17)

Hi! super interesting, but I find it strange to talk about the Alpha as a "consumer" product. Does Sony sell only consumer cameras? or what is their professional offering?



November 10, 2023
