K40 gets some long overdue upgrades and testing

I have had some upgrades for the K40 laser cutter sitting on the shelf for a long time. Some have been around since I originally got the machine, but I wanted to squeeze all the life out of the stock parts first. I finally installed them.

The upgrades.

  • LightObject Air Assist head
  • 18mm ZnSe 50.8mm FL lens
  • fresh 20mm primary mirror
  • 5mp USB camera for LightBurn

Just a short list, but these parts make a big difference over the stock 12mm lens and head. I got them installed, along with the 5mp camera to pair with LightBurn to assist with object positioning.

If you are not running LightBurn software with your laser cutter, you should be. It is bar none the best, with a very responsive development team. I am a big supporter of open source, but I pay for the license to support those guys. It is worth it, and my contribution via renewal barely buys them beers. You cannot go wrong for the price and what you get; they read their forum and answer questions.

The great folks at LightBurn also make the 3D model for their 5mp camera holder available, and will sell you one from their store if you don't have a printer. It was a bit too tall for the LightObject Air Assist head, so I had to go low profile with this model from PrusaPrinters.org. I'm still fiddling with the knobs on the calibration, but visual location of cuts is going to be a game changer. I can cut small objects from scraps now.

Testing the new Stuff

Prior to the upgrades, my K40 had issues cutting living hinges. It may have been my settings or my focal length. However, the new Air Assist head and Z-table combined to make a great hinge with only minor scorching, which was my big issue before; charcoal does not flex well. Big shout out to DenzilMakes.com for the great test box file. You should check out his very nice work; there are lots of cool designs on the site. He made it look easy because he did all the hard work, and there are even assembly instructions available via his blog.

Sneak Peek at the future.

I did this long overdue blog update while I was waiting for a fresh compile of OpenCV. I have been wanting to use OpenCV (Open Computer Vision) to convert stereo images to point clouds for a long while, for 3D modeling of real life. As a cheapskate, I could not justify buying a 3D scanner or a stereo camera. What to do? Check the parts drawer for what I have. A plain Raspberry Pi Zero and a Raspberry Pi Zero W had been gifted to me years ago by a friend who wanted more power. I also had some “original” V1 camera modules too; well, maybe they are knockoffs, but I cannot tell. The final physical piece was an OTG data transfer cable for cell phones. This will provide better resolution than my failed ESP32-CAM attempt. I really should document failures too.
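For the curious, here is roughly the shape of the OpenCV pipeline I have in mind. Treat it as a sketch only: the filenames, focal length, and hand-built Q matrix are placeholders, and a real run needs a proper stereo calibration (cv2.stereoCalibrate / cv2.stereoRectify) for this rig before the numbers mean anything.

```python
# Sketch: left/right stills to a rough point cloud with OpenCV.
# Filenames and camera numbers below are placeholders, not calibrated values.
import numpy as np
import cv2

left = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)
assert left is not None and right is not None, "could not read input images"

# Semi-global block matching; these settings need tuning per scene.
stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,      # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,
    P2=32 * 5 * 5,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Q would normally come from cv2.stereoRectify(); this hand-built one
# assumes a 66 mm baseline and a guessed focal length in pixels.
f = 1200.0                               # focal length in pixels (guess)
baseline = 0.066                         # 66 mm camera spacing, in metres
cx, cy = left.shape[1] / 2, left.shape[0] / 2
Q = np.float32([[1, 0, 0, -cx],
                [0, 1, 0, -cy],
                [0, 0, 0,  f],
                [0, 0, -1.0 / baseline, 0]])

points = cv2.reprojectImageTo3D(disparity, Q)
mask = disparity > disparity.min()       # drop pixels with no match
xyz = points[mask]                       # N x 3 point cloud, ready to dump to a .ply
```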

I got the basic Pi Zero on the network by connecting it to the Pi Zero W via the USB Ethernet gadget. I followed Adafruit’s guide for the most part, but I had to do static IP addressing on the usb0 network on both Pis, plus a little iptables forwarding. Note: do not forget to enable forwarding (net.ipv4.ip_forward=1) in /etc/sysctl.conf. After fighting with the camera cables a bit, I got everything working.

I then used Maxim Devaev’s (of PiKVM fame) great ustreamer to make the cameras accessible over the network for stream capture. This low-latency streamer is just the ticket for the lightweight Pis, though full frame rate video may be a bit much for the available memory. Single image capture, however, is very responsive at full resolution.
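Since ustreamer exposes a /snapshot endpoint for single JPEGs, grabbing a left/right pair for the stereo experiments only takes a few lines of Python. The addresses and ports here are just examples from my setup; point them at wherever your streamers are actually listening.

```python
# Grab one still from each Pi's ustreamer /snapshot endpoint.
# Hostnames/IPs and ports below are examples, not a required layout.
import urllib.request

CAMS = {
    "left.jpg":  "http://192.168.7.2:8080/snapshot",   # plain Pi Zero over usb0
    "right.jpg": "http://192.168.7.1:8080/snapshot",   # Pi Zero W
}

for filename, url in CAMS.items():
    with urllib.request.urlopen(url, timeout=10) as resp:
        with open(filename, "wb") as f:
            f.write(resp.read())
    print(f"saved {filename} from {url}")
```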

I put on my industrial design hat and worked up a FreeCAD parametric model of a “case” for everything, and it only took three revisions to get it usable. The cameras are 66mm apart, roughly normal human eye spacing, but the design will support up to 110mm for better stereo vision. I didn’t cut out the lens openings for the wider setting, but the design is parametric, so that is just one edit on the spreadsheet, a re-export, and a conversion of the STL into an SVG for the laser cutter. Props to Tinkercad for the easy STL-to-SVG conversion via import and export.
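The re-export does not even need the FreeCAD GUI. A little script run with freecadcmd can poke the spreadsheet and spit out a fresh STL for the Tinkercad step; the document, spreadsheet object, cell, and body names below are made up, so swap in whatever your own file actually uses.

```python
# Hypothetical regen script -- run with: freecadcmd regen_case.py
# Document, spreadsheet object, cell address, and body names are
# placeholders; substitute the real names from your FreeCAD file.
import FreeCAD as App
import Mesh

doc = App.openDocument("stereo_case.FCStd")
sheet = doc.getObject("Spreadsheet")   # the parametric spreadsheet object
sheet.set("B2", "110")                 # camera spacing in mm (wide setting)
doc.recompute()

# Export the rebuilt body as STL, ready for the STL-to-SVG conversion.
Mesh.export([doc.getObject("Body")], "stereo_case.stl")
```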

I hope to post an update once I get some point clouds captured and converted to meshes. The plan is to spawn the mesh processing onto the ARM big.LITTLE cluster, which recently got an OS upgrade. End of word vomit… I saw a great maker’s site that is short, sweet, and to the point. I should really copy that style: “raw data for raw nerves.”