Initially this will be based on the BARR Group Embedded C Coding Standard with some C++ carve outs.
.astylerc added and code bulk converted September 27, 2024 -- the manual conversions still need to be done.
Added LsCs-Library.conf for proper SONAME linking. Also fixed an architectural flaw with plugin loading. Added lscs.conf to the share directory so developers can "just use it." Tools can now run from anywhere.
This project could be done at any time other than when we are mass-changing the coding style. It is mostly stand alone.
Some fool put a keystroke timer on the combobox. If you are typing at roughly 60 words per minute, it leaves the text in the edit control and updates the list displayed below. A keystroke landing outside of that timer window clears the field, leaving that character as the only entry. This doesn't work in the embedded systems world, and it certainly doesn't work for the two-fingered typist who stops to look in the list for what they want. Quite possibly the timer should just be nuked. It only makes sense when fully trained data entry people are monotonously using a screen.
This is a legacy Qt bug that didn't get "fixed" until around the time Qt 6 came out. Basically, objects created in other threads end up with queued connections across threads. The trouble is the disconnect is also queued instead of executing immediately. Say thread 12 deletes an object that has connections to many other threads. The disconnect gets placed in the event queue, where the single-thread-y-ness of the main event loop will eventually get around to it. Between that actually happening and the initial deletion, one of the other threads tries to get data out of the now-deleted object. Access violation crash.
As unforgivable as it was, CopperSpice never included any version of QSerialPort. While the knee-jerk response would be "find the latest LGPL V2.1 version and pull it in," there is a reason "jerk" is in the name.
Qt is a single-threaded API. That singular thread runs the Main Event Loop, which must live in the primary thread, and that thread is where all graphics must be done. Yes, it has QThread, but that is a QObject-derived class and as such many of its signals and slots must be serviced by the single thread.
Over-reliance on the main event loop dramatically limits throughput. Even if you have a 20-core box, the bulk of your processing must happen on a single core because a thread is basically locked to a single core. Yes, it can be swapped out or otherwise moved to a different core, but at any moment it runs on one core. As a general rule a thread does not distribute its work across all 20 cores. Under most current operating systems, when a program/thread creates another thread, that new thread will tend to use the exact same core because the affinity will be set to the current core.
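If you do spread the work across real threads, you can also pin them explicitly. A minimal sketch using the Linux/glibc affinity call (the helper name and core-numbering policy are assumptions, not project API):

    // Sketch only: pin the calling thread to a specific core so worker threads
    // can be spread across the box instead of inheriting the creator's affinity.
    // Compile with -D_GNU_SOURCE if your toolchain does not already define it.
    #include <pthread.h>
    #include <sched.h>

    static bool pinCurrentThreadToCore(int core)   // hypothetical helper
    {
        cpu_set_t cpus;
        CPU_ZERO(&cpus);
        CPU_SET(core, &cpus);
        return pthread_setaffinity_np(pthread_self(), sizeof(cpus), &cpus) == 0;
    }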
At any rate, the single-thread-y-ness of Qt is one of the reasons the serial port stuff kind of sucked. Most of the examples execute in the main event loop. Hey, for 1200 baud you can make that work on an i5 gen-3 machine. Today we have 16-port cards capable of 460.8 kbps. Some 2-port cards are 921.6 kbps. There is even a 16-port card that can run 921.6 kbps on every port. While we will be married to the main event loop for some signals and slots, every serial port object must create its own unique thread, which may or may not use QThread.
A project I have contributed code to in the past is CppLinuxSerial. It is a simple, elegant code base. It should be easily extended/mirrored to support Windows natively (no MSys2 or MinGW stuff). Most likely it will work with OpenBSD and probably Mac, but I will never again own a Mac/Apple product.
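For reference, basic use of CppLinuxSerial looks roughly like this (device path, baud rate, and timeout are placeholders; check the library's README for the current API):

    #include <CppLinuxSerial/SerialPort.hpp>
    #include <string>

    using namespace mn::CppLinuxSerial;

    int main()
    {
        // Device path and settings are illustrative only.
        SerialPort port("/dev/ttyUSB0", BaudRate::B_115200);
        port.SetTimeout(100);      // milliseconds; -1 blocks forever
        port.Open();

        port.Write("AT\r\n");      // send a command

        std::string response;
        port.Read(response);       // read whatever has arrived

        port.Close();
        return 0;
    }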
Need to support RS-232, RS-422, and RS-485. Future enhancements add methods for X, Y, and ZModem file transfer. Do we also need to support Kermit?
While we may no longer need the ring buffers we had to use with GreenLeaf CommLib under the MS-DOS 640K memory limit, we definitely need some kind of cap to protect newbs from themselves. The ring buffer was also how they could implement packet-level logic. We need the option of configuring:
You do this to limit your throttling by the single-threaded main event loop. If your thread has a read (or write) buffer, you can watch for the packet and only signal when there is a full packet.
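A minimal sketch of that idea, assuming a caller-chosen capacity and a single terminator byte marking the end of a packet (the class name and terminator convention are illustrative only):

    #include <cstddef>
    #include <cstdint>
    #include <optional>
    #include <string>
    #include <vector>

    // Fixed-capacity byte ring buffer that only reports data once a full
    // (terminator-delimited) packet has arrived.
    class PacketRingBuffer
    {
    public:
        PacketRingBuffer(std::size_t capacity, uint8_t terminator)
            : m_data(capacity), m_terminator(terminator) {}

        // Returns false when the buffer is full; the caller decides the overflow policy.
        bool push(uint8_t byte)
        {
            if (m_count == m_data.size()) { return false; }
            m_data[(m_head + m_count) % m_data.size()] = byte;
            ++m_count;
            return true;
        }

        // Pops and returns one complete packet, or nothing if no terminator yet.
        std::optional<std::string> popPacket()
        {
            for (std::size_t i = 0; i < m_count; ++i) {
                if (m_data[(m_head + i) % m_data.size()] == m_terminator) {
                    std::string packet;
                    for (std::size_t j = 0; j <= i; ++j) {
                        packet.push_back(static_cast<char>(m_data[(m_head + j) % m_data.size()]));
                    }
                    m_head  = (m_head + i + 1) % m_data.size();
                    m_count -= i + 1;
                    return packet;
                }
            }
            return std::nullopt;
        }

    private:
        std::vector<uint8_t> m_data;
        uint8_t     m_terminator;
        std::size_t m_head  = 0;
        std::size_t m_count = 0;
    };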
While most of you reading this haven't seen a parallel port in years, they are still big, just not in the consumer market. Data acquisition and industrial controls are where you find a lot of them.
Usually you find the Opto22 (or other brand) rack inside a NEMA 4 enclosure connected to the PC via a 50-pin header cable to a card like the PIO-48 shown on the left. Each relay on the card is optically isolated. Each relay can be independently wired for either AC or DC power (you just have to use the correct relay). Yard arms, lights, augers, conveyor belts: imagination is your only limit. I set landfills and transfer stations up with these things, but you could easily run most of an oil refinery or grain mill from one desktop.
Here is where the academics in charge of C++ coding standards really screwed the pooch by mandating that all integers be stored in two's complement.
A port that is 6 bytes wide where every pin is wired through to the rack. (The extra 2 pins are usually grounds.) You read an 8-bit byte from the port, set the bit you wish to change, then write the byte back out. 1 turns the relay on and 0 turns the relay off, but you have to know which bit! In the world of MS-DOS and that GUI-DOS called Windows, C code and many libraries used unsigned char and unsigned short int interchangeably. They were always the same. That's really bad now.
    unsigned char uc = 'A';
    unsigned short int usi = uc;
What value is now in usi? In the correct world it is decimal 65, or 0x41, or 0b01000001. Now, with the standards butchery, it's anybody's guess. Make no mistake, guess is what it will be. If the highly misguided and incorrect C++20 standard is enforced, the assignment of uc to usi should force a two's complement conversion when compiled with a C++ compiler. Not so when compiled with C. We now have a breakage of fundamental operations that have worked since the 1970s. To the never-went-to-college crowd, as well as the never-worked-a-day-in-the-real-world college crowd, this is trivial and irrelevant. To the person standing under a valve that is about to dump molten steel on them because the wrong bit was set, it's a death sentence. All in the name of misguided academic purity.
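For reference, the read-modify-write on the port described above looks roughly like this with the Linux legacy port I/O helpers (the base address and bit number are placeholders, and ioperm() needs root or CAP_SYS_RAWIO):

    #include <sys/io.h>    // inb(), outb(), ioperm() - Linux x86 only

    // Turn one relay on or off without disturbing the other seven bits.
    // 0x378 is the classic LPT1 base address, used here purely as an example.
    bool setRelay(unsigned short port, unsigned bit, bool on)
    {
        if (ioperm(port, 1, 1) != 0) { return false; }   // grant access to this port

        unsigned char value = inb(port);                 // read the current byte
        if (on)  { value |=  static_cast<unsigned char>(1u << bit); }
        else     { value &= static_cast<unsigned char>(~(1u << bit)); }
        outb(value, port);                               // write it back out
        return true;
    }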
One of the many things we liked about NanoGUI was the bundled fonts. They were Font-Awesome fonts. If we bring in the entire Font-Awesome set we also get a ton of SVG files that can be used for buttons. A quick scan didn't show any medical device type images, but we should be able to create and contribute back. You can see an example screen of NanoGUI here. The fonts are serviceable for screen use, assuming they support enough languages. We should probably check out the license for Entypo, which NanoGUI used to use.
We need:
We will start small with what we have once Font-Awesome is brought in. If you've never worked on a medical device or embedded system project, you don't know what a landmine this is. Some designers fall in love with Tahoma thinking it is free because it is in Windows. Only when the device is almost finished do they find out Microsoft wants a million bucks to license it. We need a group of fonts that are easily read on a six-inch screen from five feet away. Why? The patient monitor will be on the other side of the patient, above and behind someone else, in an emergency situation. The medical professionals don't have time to guess. If you are the patient, you don't want them guessing.
The fonts need to be bundled so they are available in the form and theme designers.
Nuke HarfBuzz.
Many of us got our first exposure to Qt on OS/2 in or around 1987. A token few got our first exposure with SuSE Linux back when it shipped on a box of floppy disks. Even if that doesn't include you, it should make it obvious hardware and graphics have changed a lot since IBM released the Personal System/2. Neither OS/2 nor GUI-DOS (Windows 3.1 wasn't an operating system, just a task-switching GUI on top of DOS) is around anymore. If you lived through this era you know that everybody had to roll their own. As a result there is a lot of legacy baggage. This library is much larger than it needs to be and so is the list of build dependencies.
Linux is dropping support for X11. Most new Linux distros are using Wayland and provide a buggy XWayland server/interface for "backward compatibility." CopperSpice has a roll-your-own CsVulkan library for Vulkan support. We inherited XCB and OpenGL plugins as well. All must go! Or maybe most of it.
A few years ago the founder of this project forked NanoGUI for an embedded system project. There were some issues (which might now have been resolved) with that nice minimalistic GUI related to some hack done for Apple support. The fork is known as WaylandGUI and has made it into a few medical devices since. The project will make a decision as to using just GLFW or both GLFW and GLES 3 as was done there.
We will not support phones. If something happens to run on phones, fine. The real question here is: are there any medical devices risking human life with Android? The second question is: do we have any need to provide 3D graphics?
Someone needs to dig back into the WaylandGUI project on the ArmV8 platform and drill down to see the minimum we need.
This project will target only Wayland and Vulkan. If something happens to work on something else, so be it. If we stay with GLFW we should be good going forward using the distro/OS provided libraries and not have to maintain a bunch of buggy problematic code. If there is actually someone developing for Apple and GLFW seems to fall short I'm not opposed to pulling in the not-maintained nanoVG if the project deems that necessary.
We do not make changes to support phones under any circumstance.
Too many graphics libraries ASS-U-ME you have massive GPU farms to use when rendering images. Qt has a dynamic image cache that uses some age/access algorithm to purge lesser-used items. Battery-operated embedded systems trying to get 10 or more days of run time on a charge use slow, low-powered memory and don't have a GPU. Dynamic memory allocation Doth Sucketh. As a result, people using Qt, CopperSpice, or any other C++ GUI library built on this assumption get almost no benefit from the GUI. They have to roll their own widgets because they have to pre-load all images into a persistent cache and then override the paint method much like this.
So, we will allow this as a run-time option: a persistent, user-filled, user-controlled cache. Entries in it will not be deleted until the application exits or the user makes a deliberate call to re-use them. If the application run-time flag is not set, all image/paint behavior happens as it does now.
Ideally there will be a persistent image cache class where users can just feed in images either from a URL or local files. Ultimately we need to be able to support an SQLite image database as an input source for the cache, possibly as the cache itself. Do not forget that SQLite databases can be in-memory.
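A rough sketch of the kind of class this implies, with hypothetical names and QImage used purely for illustration:

    #include <QHash>
    #include <QImage>
    #include <QString>

    // Persistent, user-filled image cache: nothing is ever aged out.
    // Entries live until clear() is called or the application ends.
    class PersistentImageCache      // hypothetical class name
    {
    public:
        // Loads from a local file and keeps it under the given key.
        bool insertFromFile(const QString &key, const QString &fileName)
        {
            QImage image(fileName);
            if (image.isNull()) { return false; }
            m_images.insert(key, image);
            return true;
        }

        // Returns a null QImage when the key is unknown; never evicts.
        QImage image(const QString &key) const
        {
            return m_images.value(key);
        }

        void clear() { m_images.clear(); }

    private:
        QHash<QString, QImage> m_images;
    };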
Mostly this is about getting the LsCs-platform-build-dependencies.sh and build-LsCs-package.sh scripts working, along with any CMake and local-build.sh tweaks. Most Yocto-built operating systems for Arm are Debian based. We want to be able to cross compile without using containers and also provide instructions for building within a container. Until we have migrated to GLFW for native Wayland support, an ArmV8 build does not make sense.
This was originally a step called "Rip out all remaining Qt Declarative and QML supporting code," but it appears the WebKit in CopperSpice is the same horse-and-buggy-era WebKit from the Qt 4.x days. Google spies on everybody, so using their Blink stuff like Qt is doing now is just insane. One person could never rip out all of the spyware. Servo is the new generation. It provides both WebGL and WebGPU and is being developed with embedded applications in mind. This project cares about embedded, in particular embedded for medical devices. Having WebGPU access will help with graphing and medical images later on.
This project could be done at any time other than when we are mass-changing the coding style. While it may impact a small set of classes, it is mostly stand alone.
QStyle has always been six inches short of complete tragedy. One of the few places you can find documented examples starts on page 439 of ISBN-13 978-0-13-235416-5. Then they brought in style sheets. The library ended up with a hodge-podge of styling. Most developers ASS-U-ME you will be on a desktop with some kind of OS/Desktop-provided style to inherit from. That doesn't work in embedded systems when you have a Weston compositor for Wayland or old-fashioned X11 with nothing else loaded. Adding insult to injury, tweaking a style to change how, say, a combo box looks will cascade out and trash other widgets.
Admit it, embedded systems developers: how many times have you had to code and test custom widgets all because you couldn't set a theme and a style?
If you had a GUI tool that would let you define a "theme" (colors and fonts) and a "style," and generate the code so all you had to do was include the header and source in your build to select the style and theme, you would be done. Instead we get cool-looking screen/UI designs from the UI artists and spend roughly a third of the project creating custom widgets because we cannot safely change how buttons, sliders, you know, the common widgets, look. When you change something on the slider it pooches the slider in the combo box that wasn't supposed to change.
To fully understand this requirement, install Manjaro Cinnamon on a machine or VM. See just how much you can change with themes, styles, buttons, checkboxes, etc. We need a GUI tool that shows every standard widget at once and allows the user to define theme, style, etc. values and see the changes in real time. The combo box slider must not re-use the general slider. Neither shall the windows/dialogs. Each one must be independently customizable. The user must be able to assign unique names to the themes, styles, etc. and to the grouping of them.
These settings must be stored in a not-XML (preferably SQLite) file. A second tool is used as part of the CMake process. It generates C++ header and class files.
Eventually, instead of hard-coded, partially-filled-in styles ASS-U-ME-ing they can inherit from a desktop, we will include SQLite database(s) of styles, themes, etc.
The tool must allow the user to Save-As or otherwise copy between things already done. Import/Export need to exist as well.
We may end up having to keep some form of the QStyle class hierarchy, but it will not be the train wreck it is today.
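As a very rough sketch of what a header generated by the CMake-time tool might contain, every name, color, and font value here is hypothetical, not a committed design:

    // theme_clinical_dark.h -- example of a generated theme header (hypothetical)
    #ifndef THEME_CLINICAL_DARK_H
    #define THEME_CLINICAL_DARK_H

    #include <QColor>
    #include <QFont>

    struct ThemeClinicalDark
    {
        QColor windowBackground { 0x20, 0x24, 0x28 };
        QColor buttonFace       { 0x2E, 0x86, 0xAB };
        QColor alarmText        { 0xE6, 0x3B, 0x2E };

        QFont  labelFont { "DejaVu Sans", 14 };
        QFont  valueFont { "DejaVu Sans Mono", 22, QFont::Bold };
    };

    #endif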
Yes, we need a Forms Creator. No, we don't want to port Qt Designer or any flavor of it. We will never store screens/forms in XML! If you have never had to open your .ui file in a basic nothing editor like Leafpad to fix the corrupted XML, then you never actually worked with Qt 4.x. UI files from that era are still floating around and jacking things up in current projects. Yes, back then everyone but me thought XML was going to end global hunger and bring world peace. As I write this there are at least two wars going on and famine in multiple countries. So much for that line of thought.
Screens have hierarchies (usually) and people wish to re-use pieces/parts. XML was simply incapable of that once you got beyond the basics or made a few changes.
Best to wait until after "Goodbye Cumbersome Buggy QStyle" is done. Leverage that. Same storage technology. Directly generate code from the stored data.
I have never used it, but read up on the Altia GUI Editor. That is what we are shooting for. Ultimately we should be able to:
Don't know about code in this UI Editor. That would have source code in a data storage thing that then gets spewed out into the generated code. If it fails a code review one would have to know "where" it needed to be fixed.
This is a massive project that needs to be done stand-alone and flash-cut in. There is no way to piecemeal it.
Many academic decisions were made in the CopperSpice project. One of them was the elimination of Copy on Write (CoW), which made Qt so very fast in the days of single-core CPUs. CoW also made using exceptions nearly impossible: at the point you were unwinding the stack for an exception, the string didn't actually have its value, and your ability to get it was restricted inside the exception handler. This fundamental change is what turned a simple piece of code to interpret GDB data from near-instantaneous execution to "16 Minutes to Build a QList." Making QString synonymous with UTF-8 also needs to be addressed, especially since QChar is defined to be QChar32.
The project needs to look into a new string class. This one is just a shell/wrapper. If the string assigned to it actually exists elsewhere, then it contains a QStringView until a modification of said string requires an actual copy. Destruction of the string a view is looking at needs to force a copy as well. When creating an actual string it may default to UTF-8, UTF-16, or UTF-32.
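A bare-bones sketch of that idea using std::string_view as a stand-in for QStringView (all names are hypothetical, and forcing a copy when the viewed string is destroyed is only noted in a comment, not solved here):

    #include <string>
    #include <string_view>

    // Wrapper that holds only a view of an existing string until a
    // modification forces a private copy.
    class LsString      // hypothetical name
    {
    public:
        explicit LsString(std::string_view source) : m_view(source) {}

        // Read access never copies.
        std::string_view view() const
        {
            return m_owned.empty() ? m_view : std::string_view(m_owned);
        }

        // First modification detaches into an owned copy.
        void append(std::string_view tail)
        {
            detach();
            m_owned.append(tail);
        }

    private:
        void detach()
        {
            if (m_owned.empty() && !m_view.empty()) {
                m_owned.assign(m_view);
                m_view = {};
            }
        }

        std::string_view m_view;    // borrowed; only valid while the source string lives
        std::string      m_owned;   // filled on first modification
    };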
Project needs to decide how small of a target this library will support. The UTF-8 decision has been made on several platforms and it has issues, most notably the multi-byte indicator and the fact that most libraries ASS-U-ME a character is at most 2 bytes, while a full code point could need all 4 bytes.
UTF-8 did not exist when Windows NT was created, so it uses UTF-16LE (Little Endian). The reason so many went to UTF-8 despite all its issues is that they could control byte order. Network byte order, however, is Big Endian. Yes, there are UTF-16BE and UTF-32BE to go with UTF-32LE. If you are working with a code page that starts after the end of 8-bit ASCII, UTF-8 is really inefficient because it consumes at least one extra byte and requires additional processing for each code point.
Project needs to decide what encoding to use internally and whether we need to allow forced conversion for stream or file I/O. Always using UTF-32 has advantages, but it is a memory pig if almost your entire code set fits inside original ASCII. In either case we have to deal with endianness.
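For a concrete sense of the trade-off, here is how a few code points land in each encoding (these are the standard byte sequences, shown purely for comparison):

    U+0041 'A'           UTF-8: 41            UTF-16LE: 41 00         UTF-32LE: 41 00 00 00
    U+0416 'Ж'           UTF-8: D0 96         UTF-16LE: 16 04         UTF-32LE: 16 04 00 00
    U+1F600 (emoji)      UTF-8: F0 9F 98 80   UTF-16LE: 3D D8 00 DE   UTF-32LE: 00 F6 01 00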
Create a Flatpak runtime that is actually in the Flathub hosted runtimes, based on Freedesktop, including this library. Include a corresponding SDK for building. One of the few reasons Qt hasn't completely disappeared is the fact KDE has a runtime in Flathub.
Every graphing package I've ever used for Qt sucks. They all have a home-hobbyist mentality and they all want 100% of the data in RAM. When you are graphing a section of a database that could have a trillion rows occupying a 20TB hard drive, this is not possible. We need a database-aware graphing package that operates on a cursor to create things like these.
Our graphing package needs to know how to use a database, have a timer for refresh, repaint only the necessary stuff, obtain its limits via separate SQL/Database I/O (for NoSQL databases or indexed files). For graphs like the above we probably need 3D support. We do not want a train wreck like Qt has for 3D support. This graphing portion of the library needs to support creating every graph in the following images without the user having to create custom code.
They should be able to configure color, shape, etc. Even the graph for the battery indicator and the vertical bar for SpO2 should be doable from the graphing classes without having to hand create a custom object.
Graphing classes need to be live-stream aware as well as database aware. When the source is a live stream, the amount of history kept in RAM must be configurable. It should also allow a hook to call a storage routine for whatever the storage may be. When graphing medical data it is important to graph first, then record. If the store isn't internal to the device there can be a delay, and if a patient is coding that can lead to a medical error with an "adverse outcome."
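A sketch of the kind of data-source interface this implies: the graph pulls only the window it needs through a cursor instead of holding the whole series in RAM (the interface and names are hypothetical):

    #include <cstdint>
    #include <vector>

    struct Sample
    {
        int64_t timestampMs;
        double  value;
    };

    // The graph widget asks for limits and windows; the implementation decides
    // whether the data comes from SQLite, an indexed file, or a live stream.
    class GraphDataSource
    {
    public:
        virtual ~GraphDataSource() = default;

        // Overall extents, obtained via separate SQL or index lookups,
        // so the widget can scale its axes without scanning every row.
        virtual double minValue() const = 0;
        virtual double maxValue() const = 0;

        // Fetch only the rows needed for the visible window, cursor style.
        virtual std::vector<Sample> fetchWindow(int64_t fromMs, int64_t toMs,
                                                std::size_t maxPoints) = 0;
    };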
In addition to the basic graphics classes we need DICOM graphics classes. When it comes to DICOM, it is a standard and you just can't "wing it" like an Agile shop.
While the quality of the display hardware will vary, the data will be exactly the same on all systems. It's really data too, not like a PNG or JPG file. Here is an NIH publication with an overview.
This is the ultimate goal. We have to do everything above to get here. More and more medical devices are being required to display the image in real time on the device. The days of printing an X-ray and hanging it on a box with a light in it to read are behind us in most first-world countries. That tech "works" if you have a qualified human at the top of their game interpreting them. To do better diagnostics you need a data image that software can search for specific values in order to cleanly flag a tumor, clot, whatever.
It's not a priority. It's not really on a wish list. At some point we might want to look into WebAssembly and what that would take. The focus is desktops and medical devices, but more and more medical devices are connecting to the Internet and some will be remotely operated. The secure way to do that is to have your own program running on the remote machine with a proprietary encrypted messaging protocol. CEOs will want it on "the cloud." They have no idea what "the cloud" is and ignore the fact that nothing on "the cloud" is ever really secure. Just cross your fingers, hold your breath, and pray someone bought enough liability insurance.
It is the sincere hope of the founder of this project that something better is available by then. JavaScript and other Web technologies are neither typesafe nor secure.