Back in 2007, the world was introduced to the now legendary Velodyne HDL-64E. Quite distinctive in its size and general bulkiness, it was soon referred to as the "KFC bucket". Cars popped up all over the place with this spinning unit on their roofs. Bulky as it was, it was perfectly suited for the early days of Autonomous Vehicle (AV) research.

Several years later, Velodyne released a number of more practical units. First came the Puck, a miniaturized 16-beam Lidar that really kick-started the Lidar revolution. It was soon followed by its bigger brother, the VLP-32, a 32-beam version that would become the gold standard for both automotive and non-automotive use for many years.

As word spread about these new, somewhat affordable Lidar units, the great Lidar hype began. Companies popped up everywhere and had to find ways to break into the market. One of the things that kept Lidar from getting much attention outside of industry insiders was its relatively low resolution compared to cameras.
For example, the below image will not impress anyone.

But, if you just keep adding more and more points, it starts to look something like this:

Now, this grabs people's attention at shows like CES!
More and more beams were packed into smaller and smaller units. 32 beams became 64 beams, which became 128 beams! Single return became dual return became triple return. More companies joined in! The somewhat practical network output of the 'gold standard' Puck, a reasonable 300,000 points per second (pps), gave way to a whopping 5.2M pps. There were rumors of a 256-beam Lidar being developed, and I even heard someone speak of a 1024-beam unit being considered.
The Beam Wars were in full swing.
And here the theoretical world of the lab and the practical world collide once again. Instead of thinking about what was best for the customers, Lidar companies looked to one-up each other by adding more and more resolution to their units, with no evidence that this would actually improve the end product. What we ended up with was a number of Lidar sensors with bad range noise, severe electrical and mechanical vibration, heat management problems and reliability issues... but they sure looked pretty when they worked!
But just because the images look prettier, does that actually make the end product better? A computer sees (processes) the world very differently from a human, and it does not care how pretty an image looks. It looks for patterns and requires accurate data.
The case against resolution
If you wonder why more resolution is not always a good thing, you only have to ask yourself one simple question:
"If more resolution is always better, why isn't every security camera in the world an 8K resolution camera?"
The answer is quite simple: all that data has to go somewhere. It really is that simple. The camera industry understands this because it has been around for decades and has learned from its customers. The most common security camera sold is 720p or 1080p resolution, not 2K/4K/8K.
An average bitrate for a common 1080p camera at 30fps is around 2.5 Mbps. Compare this to a standard 32-beam Lidar unit (640Kpps), which can produce around 25 Mbps running at only 10fps (if optimized). That is ten times the data of a camera. A 128-beam Lidar (~5.2Mpps) produces nearly 100 times more data per sensor than a camera (254Mbps vs 2.5Mbps). This puts enormous pressure on the network infrastructure as well as the processing systems.
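If you want to sanity-check these numbers yourself, here is a rough back-of-the-envelope sketch in Python. The ~6 bytes per point is my own assumption; actual packet formats vary per vendor and return mode, so the results land in the same ballpark as the figures above rather than exactly on them.

```python
# Back-of-the-envelope Lidar vs. camera bandwidth estimate.
# Assumption: ~6 bytes per point on the wire (range, azimuth, intensity,
# plus packet overhead); real payloads vary by vendor and return mode.
BYTES_PER_POINT = 6
CAMERA_1080P_MBPS = 2.5  # typical compressed 1080p/30fps security stream

def lidar_mbps(points_per_second: float, bytes_per_point: int = BYTES_PER_POINT) -> float:
    """Approximate network bitrate of a raw Lidar stream in Mbps."""
    return points_per_second * bytes_per_point * 8 / 1e6

for name, pps in [("32-beam (~640K pps)", 640_000),
                  ("128-beam (~5.2M pps)", 5_200_000)]:
    mbps = lidar_mbps(pps)
    print(f"{name}: ~{mbps:.0f} Mbps, roughly {mbps / CAMERA_1080P_MBPS:.0f}x a 1080p camera")
```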
Cameras have sophisticated chip-driven compression algorithms. In a best-case scenario, these can reduce the data rate by as much as 2000:1, which in fixed security cameras is actually quite common because the scene is largely static and compression algorithms love static backgrounds. These compression algorithms and mechanisms simply do not exist (yet) for Lidar, so the network bandwidth needed to transport Lidar data remains many times larger than for camera data.
As I have mentioned in previous articles, this is a lab vs. real-world issue. In a lab environment, when you are developing 3D Perception software in a small space with a powerful development desktop, more resolution will give you better results most of the time.
But now we're entering my world (speaking for customers): the practical world. Take security, for example. A medium-sized high-security facility may require 50 sensors. 50 Lidar units at 254Mbps is 12.7Gbps of data that needs to be moved and processed every second. That fancy fiber-ring system you installed for your cameras, the one you thought had plenty of headroom? Sorry bud, can't use it any more... it typically carries only 1 Gbps. Upgrading all those modems is expensive. Even your existing 10G switches are no longer going to be enough. Your servers become a bottleneck because they only have four Ethernet ports, and you need two for your client network, which leaves only two to push all the sensor data into the server. You need larger CPUs (and possibly GPUs) to parse and process all that data, most of it completely useless because half the beams are pointing up or in the wrong direction.
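To put the site-level load in perspective, here is the same kind of quick arithmetic for that hypothetical 50-sensor site, using the per-sensor rates discussed above:

```python
# Aggregate network load for a hypothetical 50-sensor site,
# using the per-sensor rates discussed earlier in this article.
SENSORS = 50
for label, mbps_per_sensor in [("32-beam @ ~25 Mbps", 25),
                               ("128-beam @ ~254 Mbps", 254)]:
    total_gbps = SENSORS * mbps_per_sensor / 1000
    print(f"{label}: ~{total_gbps:.1f} Gbps aggregate")
```

Even the 32-beam case already saturates that 1 Gbps fiber ring; the 128-beam case blows past your 10G switching as well.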
What's becoming even more important is that (non-security-related) sites are moving more and more towards wireless solutions. Take intersections again as an example: the ideal solution is to transmit the raw sensor data to a central Road-Side Unit (RSU) for processing. At 254Mbps, you are no longer able to reliably use 2.4 GHz or even the most advanced 5 GHz WiFi standards.
I have worked with companies on point-to-point WiFi for several years, and 32 beams is about the upper limit of what a wireless system can reliably handle. It would not stand a chance with 128 beams.
The saddest part about all of this? It is completely unnecessary!
I know from practical experience that I can take a 128-beam Lidar, disable 64 beams (at least) in my recommended non-uniform configuration (not the skip-a-beam configuration), and see no discernible degradation in object detection and tracking performance. It might actually improve! Sadly, most Lidar units won't give you this option, or worse, will give you the option but then fill the data stream with zeros and still fire the lasers anyway.
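To make the idea concrete, here is a minimal sketch of what thinning a 128-beam cloud down to a non-uniform subset can look like on the host side, assuming the driver exposes a per-point beam (ring) index, as most do. The specific subset below is purely illustrative, not my actual recommended configuration, and host-side thinning obviously does not save network bandwidth the way disabling beams in the sensor itself would.

```python
import numpy as np

# Illustrative host-side beam thinning for a 128-beam sensor.
# KEEP_BEAMS is a non-uniform subset: every 4th beam across the full
# vertical field of view, plus every beam in a band near the horizon.
# This keeps 56 of 128 beams.
NUM_BEAMS = 128
KEEP_BEAMS = np.union1d(np.arange(0, NUM_BEAMS, 4),  # coarse, full FoV
                        np.arange(48, 80))           # dense near the horizon

def thin_cloud(points: np.ndarray, beam_ids: np.ndarray) -> np.ndarray:
    """Keep only the points whose beam (ring) index is in KEEP_BEAMS."""
    return points[np.isin(beam_ids, KEEP_BEAMS)]

# Fake frame: 100k points with random beam assignments, just to show usage.
points = np.random.rand(100_000, 3)
beam_ids = np.random.randint(0, NUM_BEAMS, size=100_000)
thinned = thin_cloud(points, beam_ids)
print(f"kept {len(thinned)} of {len(points)} points "
      f"using {KEEP_BEAMS.size} of {NUM_BEAMS} beams")
```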
In fact, I can guarantee you that I can get better performance with 4x MQ-8 sensors (24 beams total) than with a single 128-beam sensor. It won't even be close in frame-by-frame accuracy.
Most importantly, I can save thousands of dollars in network infrastructure and computing power this way.
The other significant problem that I, and others, have been dealing with is that unrealistically high resolutions make for very pretty pictures and tend to 'wow' people who have little understanding of how perception systems actually function. This leads to the "we want this unit!" phenomenon, where a specific Lidar sensor is selected by the end customer before the system is even designed. This forces the designers and installers to work with sub-optimal units, which leads to the following scenario:
In Proof-of-Concept situations, the unit will often perform quite nicely, and it passes the PoC success criteria. Everyone is happy. However, when deployments are being designed and costed, the price of the unit(s) and especially the added cost of the network infrastructure needed to move all this excessive data around drives the cost of the system through the roof. Sticker shock sets in, and the person who has to sign the check is often not the same person who said "we want this unit!" earlier. The designers and installers are then forced to redesign their system, and when they finally manage to convince the end customer that the selected unit was not the right one, the whole PoC process has to start all over again.
I have seen a number of projects never get beyond a very successful PoC stage for this very reason. Business-wise, it is this kind of short-term thinking that is killing long-term revenue and seriously hurting the industry.
A plea to the Lidar industry
Lidar companies, please stop peddling the highest-resolution Lidar for every project and shooting yourselves in the foot long term. Leave it to the experts to decide which unit is best.

It is my recommendation that Lidar companies worry less about adding more resolution to their units and instead shift focus to making their units behave more like cameras: built-in hardware-accelerated compression, an optimized payload that minimizes the bit count and the number of empty fields being transmitted, and moving any non-critical information to lower-frequency packets.
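As a purely hypothetical illustration of the payload point, here is what a leaner point encoding could look like. The field widths and resolutions are my own assumptions for the sketch, not any vendor's actual packet format:

```python
import struct

# Hypothetical compact point encoding (illustrative field widths only):
#   range      uint16  in 2 mm steps (covers ~131 m)
#   azimuth    uint16  in 0.01 degree steps
#   beam id    uint8   (elevation is implied by the beam)
#   intensity  uint8
COMPACT = struct.Struct("<HHBB")   # 6 bytes per point
VERBOSE = struct.Struct("<ffff")   # 16 bytes per point (x, y, z, intensity as float32)

def pack_compact(range_m: float, azimuth_deg: float, beam: int, intensity: int) -> bytes:
    """Pack one return into the 6-byte compact format above."""
    return COMPACT.pack(int(range_m / 0.002),
                        int(azimuth_deg * 100) % 36000,
                        beam, intensity)

point = pack_compact(range_m=42.37, azimuth_deg=123.45, beam=17, intensity=200)
print(f"{len(point)} bytes per point vs {VERBOSE.size} bytes verbose "
      f"({VERBOSE.size / len(point):.1f}x smaller)")
```

Keeping points in polar form and quantizing them to sensible resolutions, as the sketch does, is exactly the kind of bit-level housekeeping that chips away at those hundreds of megabits per second before compression even enters the picture.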
Work with the experts to define the optimal beam patterns. Learn and understand field-of-view requirements. Learn what the real-world bandwidth limitations are. Work WITH the customers to define a better and more efficient API. All of this will reduce the system cost and make Lidar a much more viable sensor to compete against cameras and radar.
Think about the system, not just the sensor.
Some positive developments
There is already some good news in this regard. While it does not apply to spinning sensors, the makers of MOEMS- and galvo-mirror-based Lidar units have started to develop advanced dynamic resolution capabilities.
Last year I collaborated with two companies to define new scan patterns and optimized fields of view. Some of this work was demonstrated at CES earlier this year.
I hope to soon be able to do a deep-dive into this technology because I believe that, once it is ready for prime-time, it will be a game changer for the industry, and it will shake up who the main players in the market will be.
Stay tuned.