Linux on F4 and F7

stevestrong
Wed Nov 14, 2018 2:07 pm
https://elinux.org/STM32

Has anyone tried it out?


ag123
Wed Nov 14, 2018 5:42 pm
nope, haven’t tried any of those; they may be based on uClinux, my guess
http://www.uclinux.org/

racemaniac
Wed Nov 14, 2018 6:00 pm
What would be the use of doing that? If you’re going full OS and Linux, why not go straight to a Raspberry Pi?

fpiSTM
Wed Nov 14, 2018 7:25 pm
A few years ago, I contributed to that (the kernel part).
Don’t know the exact status now.
One other “OS” you can check is https://www.zephyrproject.org/
which supports several architectures, including some STM32:
https://docs.zephyrproject.org/latest/b … index.html
And this one ;)
https://docs.zephyrproject.org/latest/b … n_dev.html

mrburnette
Wed Nov 14, 2018 9:25 pm
[racemaniac – Wed Nov 14, 2018 6:00 pm] –
What would be the use of doing that? If you’re going full OS and Linux, why not go straight to a Raspberry Pi?

There are in this world folks who have an amazing amount of time on their hands: I am not one, but I am in awe of what can be done if one is just stubborn enough (with some creativity, too). I think it is kind of cool.

It is surely just one of those “just because s/he can” things…

But there “may” be a real use for such code – one does not always want to run a full OS. ST’s microcontrollers are powerful ARM units in their own right, so having an embedded OS can be useful for many projects.

Ray


C_D
Thu Dec 06, 2018 6:19 pm
The example that comes to mind for me is network routers and other network hardware running ultra-slim Linux OSes like OpenWrt (or whatever it’s called now). That is an application that massively benefits from all the networking/usb/remote console stuff that you get with the Linux kernel while running on hardware with a fraction of the complexity and cost of a Raspberry Pi or equivalent.

Obviously if you are on the hobby or small volume end of the spectrum you are probably much better off using more expensive hardware and saving yourself a heap of time, but I’m sure there will be some niches where a micro-linux device could be appropriate for the cost/space/power consumption advantages.


mrburnette
Fri Dec 07, 2018 1:51 am
[C_D – Thu Dec 06, 2018 6:19 pm] –
… with a fraction of the complexity and cost of a Raspberry Pi or equivalent.

Obviously if you are on the hobby or small volume end of the spectrum you are probably much better off using more expensive hardware and saving yourself a heap of time, but I’m sure there will be some niches where a micro-linux device could be appropriate for the cost/space/power consumption advantages.

Here in Atlanta, the RPi_Zero-W is regularly available for $5 and the RPi_3B for $29.99
In many circumstances, the Zero-W is all that is needed for a high-end embedded system where near-realtime I/O is not mandated by the application. Of course, RT kernels are readily available to minimize latency.

I have worked rather extensively with the RPi’s over the past year and find these Linux systems very capable. At $5, they are only 2.5 times the Blue Pill price, but the workmanship of the board is significantly better. Programming in C is very easy, as is access to the I/O lines.

Ray


ag123
Fri Dec 07, 2018 9:24 pm
the lack of an mmu makes it hard to run ‘proper’ linux on the stm32 f4 and f7, nevertheless i think there are niche use cases
but given that the pi (and clones) don’t cost that much more, often come with at least hundreds of megs of ram, and various models are multi-core, if one simply wants linux with things pretty much running in ram it is probably easier to just use the pi* boards or even beagle* boards

i think the beagle bone black has an adc, though it is something like 8 or 10 bits.
success with io intensive tasks on the pi* boards vs mcus (e.g. stm32 Fx boards) is varied.

most 3d printing firmware, e.g. marlin, is based on conventional mcus
http://marlinfw.org/
probably the most successful 3d printing ‘firmware’ on cortex-a chips is the beagle bone black + replicape
https://www.thing-printer.com/
the original pi design is simply too io deprived for this purpose,
while the beagle bone black is based on the TI AM335x
https://beagleboard.org/black
which has a dedicated co-processor (PRU) that deals with io intensive real time tasks

the pi* seems too io deprived to deal with io intensive real time tasks like 3d printing;
in the first place the pi’s original connector is 40 pins, with some 8 pins going to ground

the trouble with io is that sometimes you go into a tight loop and poll a dozen ios to react to actual changes and maintain state in line with the real world environment. a multi-tasking os and cpu often simply takes the liberty to context switch with little regard to whether a particular pin or io needs to be polled at the correct interval


mrburnette
Sat Dec 08, 2018 1:15 am
[ag123 – Fri Dec 07, 2018 9:24 pm] –

the trouble with io is that sometimes you go into a tight loop and poll a dozen ios to react to actual changes and maintain state in line with the real world environment. a multi-tasking os and cpu often simply takes the liberty to context switch with little regard to whether a particular pin or io needs to be polled at the correct interval

Unfortunately you are correct.

But, it is because the software designer elected to architect using polling. Before uC’s became so fast, polling across numerous inputs would not have been possible using the host processor – custom logic would have handled the numerous input signals and the processor would have gotten an NMI to react.

Polling is possible because the current uC’s are darn fast. Arduino-trained developers are indoctrinated into polling because of the loop() default structure. A classically trained embedded software engineer would likely approach the problem differently; perhaps using an FPLA or external glue logic.

I often talk about the “design budget”, which implies the programmer understands the application resource utilization: both in cpu cycle-timing and in SRAM and flash profiling. Most of the time all I see is a blank stare from my audience – as if they had never given any serious thought to such concerns. Unfortunately, programmers often think their job is just programming… solve a problem. Often such naivety creates serious downstream problems when the evolving software and selected hardware do not perform correctly due to some unforeseen constraints.

Software design is more complex than writing code; it is writing code to enable the selected hardware to perform correctly (thus) to enable a desired functionality in a closed system. Open-system programming is a whole different subject.

Ray

Added: https://www.pcmag.com/encyclopedia/term … sed-system
A (closed) system in which the specifications are kept secret to prevent interference from third parties. It inhibits third-party software from being installed; it keeps third-party hardware from interoperating with it, and it prevents third-party enhancements from improving the product. Contrast with open system.


ag123
Sat Dec 08, 2018 7:05 am
i’m not familiar with FPGAs, CPLDs etc
but if this project is a guide
viewtopic.php?f=30&p=51187
https://www.stm32duino.com/viewtopic.php?f=30&t=2716

one of the ways the cortex-a* boards can cope with the io handicap is to use an FPGA or CPLD as an io expander
https://www.intel.com/content/www/us/en … 05644.html
i’m not too sure which FPGAs or CPLDs have built-in ADCs or DACs though.
this development may mirror conventional intel-like cpu designs where you have a ‘north bridge’ with a high speed data bus, these days often in the GBps, and a ‘south bridge’ for ‘slow’ io
i think even if FPGAs and CPLDs with built-in ADC/DAC blocks exist, they are probably expensive today due to low production volumes
and i think the ‘common hack’ is to use a ‘slow’ mcu like a cortex-m with its bundled adc/dac paired up with the cortex-a to do the io intensive tasks (what an abuse of the cortex-m) :P :lol:

i’ve seen some articles on FPGAs that, when combined with fast parallel ADCs, achieve those staggering sample rates of > 1 G samples per sec
it remains a ‘dream’ for me, but i think that is probably how the fast oscilloscopes are made
the FPGAs and fast ADCs are expensive, and i doubt i could handle the signal rates well as an amateur;
a finger touch could probably capacitively ‘short’ the signals to ground at those rates
my ‘fastest’ ‘ghetto’ oscilloscope is the stm32 F303 nucleo, which is capable of some 18-20 msps quad interleaved
i’m yet to make that ‘project’ work for myself, but it is certainly low cost and rather useful as a stand-in oscilloscope vs none
———
polling:
polling unfortunately is possibly the ‘cheap(est) & dirty’ way to do much of the io these days, especially when dealing with *analog* stuff
it used to be op amps / caps etc as filters; these days the fad seems to be to just sample it (adc),
do an FIR filter, and there you get your signal, no hardware
:lol:

in terms of polling, i used an event loop to do the ‘async’ tasks that are time insensitive, e.g. ‘co-operative multitasking’ like blinking a led while doing something else
viewtopic.php?f=18&t=4299
and a timer for the ‘real time’ stuff. that apparently works pretty well on stm32, but it is limited by how fast the timer interrupts and adcs can turn around, which i think puts rather low upper limits on the sample rates with this approach
a nice thing about stm32 is that it is ‘stuffed with timers’ many of them
:lol:


Squonk42
Sat Dec 08, 2018 2:21 pm
To answer the original post: no, I haven’t, and I don’t think it is a good idea to spend your time running an OS like Linux on a CPU without an MMU.

I did that 10 years ago, and I don’t recommend it at all: it is a total waste of resources for low-level and/or real-time applications, porting applications that use fork() is not possible (i.e. almost all of them), and problems are difficult to spot because of the lack of isolation between processes.

Given the price of CPUs with virtual memory support, don’t waste your time and money; there is not much experience to be gained from solving these kinds of problems, which were solved a long time ago, either using an MMU or just well-crafted embedded software design.

On this point, and from my 38 years of experience in software/hardware development, I completely agree with these two posts from Ray:
viewtopic.php?f=13&t=4366&p=51179#p51179
viewtopic.php?f=13&t=4366&p=51179#p51189

It is not a desire to wring every cycle out of a uC. Rather, the goal is to create functioning software to run as the requirements demand. I find it helps to write the requirements on paper and refer to the goal often.

I would just correct it to: “combine functioning software and hardware to run as the requirements demand, with the tightest budget and power”.

To handle real-time concurrent tasks properly, I found that one of the best solutions is to use non-blocking cooperative finite-state automata, in a protothread fashion.

From a purely theoretical point of view, a Moore/Mealy state machine is much simpler than a slightly more complex push-down automaton (use only when required) or a full-fledged spaghetti-plate program that can only be modeled using a Universal Turing machine, and thus it can be validated completely.

And BTW, protothreads are much lighter than POSIX threads or tasks.
