Is there any way to overclock the speed of the CPU at runtime? Not at full speed (128 MHz), but somewhere around 90 MHz.
If you look at the places where the F_CPU definition is used
https://github.com/rogerclarkmelbourne/ … _cpu&type=
It’s used in multiple places
All of those would need to be changed to use a variable rather than a definition, and some macros changed to functions, etc.
Additionally, parts of the system startup code, e.g. the places that set the hardware clock PLL, would need to be re-run, and I suspect the startup code would need to be broken into smaller modules, or you would need to duplicate some code.
It’s an interesting project if you want to take it on, as you could also integrate stopping and restarting the USB, so the processor could switch to an overclocked speed for a short period to do intensive processing, then switch back to 72 MHz to run USB.
Or, it may be possible to run at 128MHz for a few milliseconds without impacting the USB at all.
Please bear in mind that at 96 MHz the USB will not work. It only works at 72 MHz and 48 MHz, as this is a limitation of the MCU’s internal hardware.
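As a side note on the “around 90 MHz” target: on the F103 the PLL multiplies the HSE clock by an integer factor of 2–16, so assuming the Blue Pill’s 8 MHz crystal feeds the PLL undivided (the PLLXTPRE /2 option would give 4 MHz steps instead), only multiples of 8 MHz are reachable, and 90 MHz is not one of them. A small host-side sketch of that constraint, plus the USB rule above (function names are mine, for illustration only):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch, assuming an 8 MHz HSE feeding the PLL undivided:
 * SYSCLK = 8 MHz * mul, with mul in 2..16, so only multiples of
 * 8 MHz are reachable (72, 96, 128 MHz - but not 90 MHz). */
static int pll_multiplier_for(uint32_t sysclk_mhz) {
    if (sysclk_mhz % 8 != 0)
        return -1;                   /* not reachable from 8 MHz HSE */
    uint32_t mul = sysclk_mhz / 8;
    if (mul < 2 || mul > 16)
        return -1;                   /* outside the PLL's range */
    return (int)mul;
}

/* USB needs 48 MHz: SYSCLK of 48 MHz (USB prescaler /1) or
 * 72 MHz (/1.5), which is why only those two speeds work. */
static int usb_possible(uint32_t sysclk_mhz) {
    return sysclk_mhz == 48 || sysclk_mhz == 72;
}
```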
[RogerClark – Thu Nov 23, 2017 8:13 pm] –
Or, it may be possible to run at 128MHz for a few milliseconds without impacting the USB at all.
Thanks Roger!!
Any code snippet for that??
Instead, I’m in doubt about:
#define SYSTICK_RELOAD_VAL (F_CPU / 1000 - 1)
[alexandros – Fri Nov 24, 2017 8:20 am] –
Any code snippet for that??
No. I’m not aware of any code to change the clock speed during at runtime.
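On the SYSTICK_RELOAD_VAL question above: the macro just derives a 1 ms tick period from F_CPU, so that value is exactly the kind of thing that must be recomputed whenever the clock changes. A minimal host-side sketch (the function name is mine, not the core’s):

```c
#include <assert.h>
#include <stdint.h>

/* SysTick counts down (reload + 1) core cycles per interrupt, so a
 * 1 ms tick needs reload = F_CPU / 1000 - 1.  After a clock switch
 * this must be rewritten, or millis()/micros() drift. */
static uint32_t systick_reload_for(uint32_t f_cpu_hz) {
    return f_cpu_hz / 1000U - 1U;
}
```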
You can change the CPU clock at runtime with the F103. I did it on the F103 in the past (with Mecrisp Forth) and it worked perfectly. I was changing the clock on-the-fly in a loop, in a sequence like 72 MHz / 8 MHz / 128 MHz / 48 MHz, without a crash. Mind that the clock’s PLL setting takes some time (the phase-locked loop needs to lock at the new frequency), so there will always be a small “time gap” while the PLL locks.
You have to take the clock-setting routine from the core, and you want to add SysTick and baud-rate setting changes as well, so that you stay consistent with millis()/micros() and the UART speed. I did not test with USB, as it is supported at only a few clock frequencies.
void set_clock(uint32_t clk_khz, uint32_t baudrate) {
set_cpu_clk(clk_khz); // see core
set_ticks(clk_khz); // see core
set_baudrate(baudrate); // see core
}
..
#define BAUDRATE 115200
set_clock(8000, BAUDRATE);
set_clock(128000, BAUDRATE);
set_clock(48000, BAUDRATE);
set_clock(72000, BAUDRATE);
set_clock(96000, BAUDRATE);
..
The big Q is why you need such an on-the-fly clock switch.
Energy?? You may run the Blue Pill at 96 MHz easily and go to sleep when needed. The total energy spent might be much lower than when switching between various frequencies.
Or do you want to have the USB available (i.e. at 48/72 MHz) and only switch to 96/128 MHz in order to speed up some calcs??
[Pito – Fri Nov 24, 2017 11:57 am] –
The big Q is why you need such an on-the-fly clock switch. <…> Or do you want to have USB available (ie at 48/72MHz) and switch to 96/128MHz in order to speed up some calcs??
I just want to test whether I get good results in specific parts of the code at 96 MHz, with the rest at 72.
I notice, though, a temperature rise in the CPU when overclocking. OK, that’s normal.
[Pito – Fri Nov 24, 2017 11:19 am] –
You can change the cpu clock at runtime with F103. <…>
You have to open the libmaple core files and look for the functions (or the related parts of the functions) which handle the settings. I think all the stuff is there (it must be there, of course).
Then put the stuff into the functions above. An easy DIY exercise.
[Pito – Fri Nov 24, 2017 12:17 pm] –
There is no such library.
You have to open the libmaple core files and look for the functions which handle the settings. <…>
Yeah, I just realised it!
Thanks!
rcc.c
systick.c
usart.c
The code may be in some old post. What I remember is that I had to change the clock source to HSI, then change the PLL settings, and then change back to HSE. It worked fine for both, but USB only worked at 48 MHz and not 96 as expected. I did not care about SysTick or anything else, but as long as you take care of those things, you should be good.
I would think the best approach would be to change F_CPU to a variable rather than a macro, change it accordingly when you change speed, and, if you are using any peripheral with speed settings, re-initialize it. It may be a bit of work to modify everything that’s needed, but it should be perfectly possible.
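A rough host-side sketch of that “F_CPU as a variable” idea: `g_cpu_hz` stands in for the macro, `set_sysclk_hw()` is a hypothetical placeholder for the real RCC/PLL reprogramming, and the USART divisor is the F103’s PCLK/baud ratio (integer part only). None of this is the core’s actual API, just the shape of the bookkeeping:

```c
#include <assert.h>
#include <stdint.h>

/* g_cpu_hz replaces the compile-time F_CPU macro.  Everything that
 * was derived from F_CPU (SysTick reload, UART divisors) must be
 * recomputed from it after every switch. */
static uint32_t g_cpu_hz = 72000000UL;

/* Hypothetical hardware hook: would reprogram the RCC/PLL. */
static void set_sysclk_hw(uint32_t hz) { (void)hz; }

/* F103 USART baud divisor ~ PCLK / baud, rounded (the hardware
 * register is 12.4 fixed point; only the integer part shown). */
static uint32_t usart_div(uint32_t pclk_hz, uint32_t baud) {
    return (pclk_hz + baud / 2U) / baud;
}

static void switch_clock(uint32_t new_hz) {
    set_sysclk_hw(new_hz);   /* PLL change; small gap while it locks */
    g_cpu_hz = new_hz;       /* then re-derive everything from this  */
    /* on real hardware: rewrite the SysTick reload
     * (g_cpu_hz / 1000 - 1) and re-init the USART with
     * usart_div(g_cpu_hz, baud) here. */
}
```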
As Roger, Ray, and others I can’t remember have said before, in terms of performance per watt of energy used it seems better to run the CPU at the maximum speed you need and then sleep often, rather than to speed up and slow down.
[Pito – Fri Nov 24, 2017 11:19 am] –
You have to take the clock setting routine from the core,
Uh, which core? Arduino_STM32 doesn’t have any set_cpu_clk, set_ticks or set_baudrate. I used the search on GitHub.
rcc.c is in
Arduino_STM32/STM32F1/cores/maple/libmaple/rcc_f1.c
and I found this
__deprecated
void rcc_clk_init(rcc_sysclk_src sysclk_src,
rcc_pllsrc pll_src,
rcc_pll_multiplier pll_mul);
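For what it’s worth, that deprecated rcc_clk_init() is the routine the libmaple startup code itself calls, so a runtime switch could be sketched on top of it roughly like this. This is an untested sketch: it glosses over flash wait states, APB prescalers and falling back to HSI while the PLL is reprogrammed, and the enum values are what I believe rcc_f1.h defines, not verified:

```c
#include <libmaple/rcc.h>
#include <libmaple/systick.h>

/* Sketch only: re-run the core's clock init with a new PLL
 * multiplier, then restore 1 ms SysTick ticks for the new
 * core frequency. */
static void set_sysclk(rcc_pll_multiplier mul, uint32_t new_hz) {
    rcc_clk_init(RCC_CLKSRC_PLL, RCC_PLLSRC_HSE, mul);
    systick_init(new_hz / 1000 - 1);
}

/* e.g. set_sysclk(RCC_PLLMUL_9, 72000000);    back to stock 72 MHz
 *      set_sysclk(RCC_PLLMUL_12, 96000000);   overclock, USB off  */
```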
All the cores we have include routines/functions/methods for setting up the CPU clock, the SysTick reload value and the UART baud-rate settings..
..
set_clock(128000, 460800);
// do math, send/receive big data..
set_clock(38, 300);
// scan sensors, buttons, write slow data to SPI flash
// or onto Sdcard, send/receive slow telemetry
set_clock(128000, 460800);
// do math, send/receive big data..
..
Did you notice whether the USB stops working if you switch to 96 MHz for a short time?
It normally takes the PC host quite a long time to notice that a USB device is not responding, so I wonder if switching to 120 MHz for 100 ms would kill the USB?
That’s an interesting suggestion, e.g. change to 72 MHz in the USB ISR.
Unfortunately I don’t know enough about USB to know exactly when data is clocked in and out, etc.
[ahull – Sat Nov 25, 2017 2:32 am] –
It seems USB supports suspending, so you could, in theory: boot, enumerate USB, suspend, run at crazy speed, go back to normal speed, un-suspend, say something sane on USB, rinse, repeat.
That doc seems to imply that the maximum time while suspended is 10 ms, and this is just because the host observes that the client has not responded.
Suspended – When no traffic is observed on the bus for a period of 1 millisecond, a USB device enters this state, characterized by its low power consumption. The device’s address and configuration settings are maintained while suspended. A device exits the suspended state as soon as it begins seeing bus activity again. The host is expected to allow 10 milliseconds before expecting the device to respond to data transfers after resume.
Who initiates the ‘resume’?
Does the host poll the USB in some way?
What does the host do after 10 ms?
Or does the target initiate the resume as it switches back to ‘normal’ speed, and then it has 10 ms to be able to respond sensibly on USB?
I did not use the USB while messing with switching the CPU clock under Forth..
Best design practice is to set your clock, run, and sleep, if required… (save microWatts & miniWatts for a better World)
Performance (increase) is often the reason programmers state for overclocking. In my prior business roles, I have been known to kick a programmer’s butt for such statements (metaphorically, as I do not promote the use of workplace violence to increase employee productivity) …
Likely the true issue behind the perceived need for a hardware boost is poor programming choices by the programmer. This is the Microsoft view: … it runs slow because the hardware is slow. I love Microsoft because they cause businesses to dump $billions in perfectly good h/w into the resale market, so we Linux dudes can give it a long second life.
1. Understand your program intent
2. Profile your functions and loops (understand your code and where resources are consumed)
3. Unroll silly stuff that causes C++ to barf (the compiler may not do this for you)
4. Learn to write solid, efficient code (do not be stupid on purpose)
Ray
[mrburnette – Sun Nov 26, 2017 4:40 pm] –
All this talk of realtime “overclocking” is academic IMO. It is an interesting rainy-day diversion, but just because something can be done does not mean it should be done. <…>
How can I disagree? Very well said.
[mrburnette – Sun Nov 26, 2017 4:40 pm] – “do not be stupid on purpose”
My wife has been telling me that for years, so it must be true…
There are also possibly ways to hack the performance without changing the clock speed, e.g. by changing the flash wait states. (Someone may already have tried doing this.)
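On those wait states: the F103 documentation specifies 0 WS up to 24 MHz, 1 WS up to 48 MHz and 2 WS up to 72 MHz; overclocked speeds are out of spec, so 2 WS is the documented ceiling, and running fewer wait states than required is exactly the kind of hack that may or may not hold up. A trivial sketch of the documented rule:

```c
#include <assert.h>
#include <stdint.h>

/* Flash latency rule from the STM32F103 reference manual:
 * 0 WS for SYSCLK <= 24 MHz, 1 WS up to 48 MHz, 2 WS up to 72 MHz.
 * Above 72 MHz the part is out of spec; 2 WS is the documented max. */
static uint32_t flash_ws_for(uint32_t sysclk_hz) {
    if (sysclk_hz <= 24000000UL) return 0;
    if (sysclk_hz <= 48000000UL) return 1;
    return 2;
}
```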
[RogerClark – Mon Nov 27, 2017 10:01 am] –
I quite like an academic challenge, but in this case I simply don’t have the time <…>
Time: that is rather the point Roger, is it not?
We engineers here all know the school-taught methodology around diminishing returns. I love an academic question too, but even we retirees have time constraints on our play time. Academic questions seem to stimulate less personal drive for answers now that I am old, versus back when I was young and having to “keep up my (expertise) appearances.”
Some young_gun working out in the industry with access to a well equipped lab could grind out the results during her lunch period. Me? Well, I have been working on my morning coffee for an hour already and lab time today is already allocated (I purchased a couple of Amazon Dots over the Thanksgiving holiday) and I want to do a bit of network sniffing.
No commercial engineer would go to production using an STM32 device configured outside of published “best practices” if for no reason other than product liability. Academic answers then simply become answers that are interesting but not necessarily valuable in real world uses. In my mind, it is almost a waste of valuable time.
Ray