Invalid interaction between the RTC and the I2C drivers for the sam0 #21114


Closed
sslupsky opened this issue Dec 2, 2019 · 14 comments
Labels
area: Timer · bug · platform: Microchip SAM (formerly Atmel SAM) · priority: low · Stale

Comments

@sslupsky
Contributor

sslupsky commented Dec 2, 2019

Describe the bug
I am not sure if this is a bug yet. My observation has been that when a kernel timer is used and the soc device has power management enabled (CONFIG_SYS_POWER_MANAGEMENT and CONFIG_SYS_POWER_SLEEP_STATES) the period of time elapsed between successive expiry of the timer can vary significantly. For example, I have five kernel timers configured as described below with a period of 10 seconds. I have observed the time period between execution of the work function to be between about 2 seconds and 18 seconds when the sleep states are enabled for a SAMD21 SOC.

If I limit the number of running timers to 1, the time period between successive expiry appears to be reasonably consistent. As soon as I introduce one or more additional timers, the period is erratic.

I have also noticed erratic power consumption. I'll detail that in another issue.

To Reproduce
Steps to reproduce the behavior:

  1. start with Blinky
  2. add timers described below
  3. add system power management and sleep states described in issue i2c-sam0 sleeps waiting for interrupt #21092
  4. See error described above
#define SLEEP_TIME        1000
#define BPS_SAMPLE_PERIOD 10000

static struct k_work bps_work;

static void bps(void) {
	struct device *dev = device_get_binding(BPS_DEVICE_NAME);
	struct sensor_value temp, press;

	if (dev) {
		sensor_sample_fetch(dev);
		sensor_channel_get(dev, SENSOR_CHAN_AMBIENT_TEMP, &temp);
		sensor_channel_get(dev, SENSOR_CHAN_PRESS, &press);

		printk("temp: %d.%06d, press: %d.%06d\n",
		       temp.val1, temp.val2, press.val1, press.val2);
	}
}

static void bps_work_function(struct k_work *work) {
	bps();
}

static void bps_timer_handler(struct k_timer *timer_id) {
	k_work_submit(&bps_work);
}

K_TIMER_DEFINE(bps_timer, bps_timer_handler, NULL);

void main(void) {
	/* bps timer */
	k_work_init(&bps_work, bps_work_function);
	k_timer_start(&bps_timer, BPS_SAMPLE_PERIOD, BPS_SAMPLE_PERIOD);
	while (1) {
		k_sleep(SLEEP_TIME);
	}
}

Expected behavior
Consistent time period between successive expiry of the timer.

Impact
The variance in the time period is excessive and makes the timers unreliable.

@sslupsky sslupsky added the bug label Dec 2, 2019
@andrewboie andrewboie added the area: Timer label Dec 3, 2019
@dleach02 dleach02 added the priority: medium label Dec 3, 2019
@andyross
Collaborator

andyross commented Dec 3, 2019

For clarity: if CONFIG_SYS_POWER_MANAGEMENT is not configured, and no other changes are made to the kconfig, the symptoms go away?

And what features are you looking at getting from the PM settings? I'll admit I don't know the atmel hardware well, but a quick grep in the tree seems to be telling me that there is no soc-specific PM support for atmel_sam0 at all.

The linked bug in #21092 has what I'm guessing is your own work on a PM driver? I do note that in DEEP_SLEEP_2 state, you're disabling the SysTick interrupt. Well... that's the timer device. If you disable that, you won't get timer interrupts and will end up waking up for some random reason later and at some random phase of the timer counter (which is 24 bit, and wraps rapidly). Are you sure this isn't what you're seeing?

@sslupsky
Contributor Author

sslupsky commented Dec 3, 2019

The sleep states described in #21092 are what I am using for SAM0 power management. I forgot to mention that I disabled the SysTick and replaced it with the RTC, following some of the work I found in the Zephyr codebase related to the nRF5x. So, the RTC runs at 32 kHz and I derive the "tick" from the RTC. I have tested this and it appears to work. I modified the clock configuration in soc.h and soc_samd21x.c as follows:

soc.h:

#define SOC_ATMEL_SAM0_OSCULP32K_FREQ_HZ 32768
#define SOC_ATMEL_SAM0_GCLK2_FREQ_HZ SOC_ATMEL_SAM0_OSCULP32K_FREQ_HZ

soc_samd21x.c: gclks_init()

	/* OSCULP32K -> GCLK2 (GENDIV_DIV(0) leaves the generator undivided) */
	GCLK->GENDIV.reg = GCLK_GENDIV_ID(2) | GCLK_GENDIV_DIV(0);
	wait_gclk_synchronization();

	GCLK->GENCTRL.reg =
	    GCLK_GENCTRL_ID(2) | GCLK_GENCTRL_SRC_OSCULP32K | GCLK_GENCTRL_GENEN;
	wait_gclk_synchronization();

The device tree had to be updated to specify GCLK2 as the source for the RTC:
samd.dtsi:

		rtc: rtc@40001400 {
			compatible = "atmel,sam0-rtc";
			reg = <0x40001400 0x1C>;
			interrupts = <3 0>;
			clock-generator = <2>;
			status = "disabled";
			label = "RTC";
		};

And the sam0 rtc code required an update to define the CYCLES_PER_TICK:
sam0_rtc_timer.c:

/* Number of sys timer cycles per tick. */
#define CYCLES_PER_TICK (CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC \
			 / CONFIG_SYS_CLOCK_TICKS_PER_SEC)

The settings in my board defconfig for the clock values are:

CONFIG_SOC_ATMEL_SAMD_XOSC32K=y
CONFIG_SOC_ATMEL_SAMD_XOSC32K_AS_MAIN=y
CONFIG_CORTEX_M_SYSTICK=n
CONFIG_SAM0_RTC_TIMER=y
CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC=32768
CONFIG_SYS_CLOCK_TICKS_PER_SEC=1000
CONFIG_SYS_POWER_MANAGEMENT=y
CONFIG_SYS_POWER_SLEEP_STATES=y
CONFIG_SYS_PM_MIN_RESIDENCY_SLEEP_1=1
CONFIG_SYS_PM_MIN_RESIDENCY_SLEEP_2=10

If I force the power state to stay in the active state with sys_pm_force_power_state(SYS_POWER_STATE_ACTIVE) it appears to alleviate the symptoms. The symptoms reappear if the state is set to "AUTO".

@andyross
Collaborator

andyross commented Dec 3, 2019

But if you use the same RTC driver without SYS_POWER_MANAGEMENT=y, you get reliable behavior?

@bendiscz is the author of that RTC driver. I don't know the hardware, any ideas about whether there might be an interaction with the power management driver detailed here? This definitely smells like a hardware/driver level thing to me.

@sslupsky
Contributor Author

sslupsky commented Dec 3, 2019

Yes, if sys_pm_force_power_state(SYS_POWER_STATE_ACTIVE) is executed, system PM is forced "ACTIVE" and none of the system PM sleep states are used.

Everything "appears" to work well from a driver perspective. Timing appears to be fine if I do not use more than one kernel timer. Do kernel timers use other hardware timers?

I am new to Zephyr and have been climbing a steep learning curve, so I could be missing something here.

@andyross
Collaborator

andyross commented Dec 3, 2019

Can you log/trace the calls to z_clock_set_timeout() and the ISR in the RTC driver and make sure the arguments look correct (i.e. if you're told to set a timeout for 82 ticks that the ISR fires 82 ticks later, etc...)?

From the kernel's perspective, nothing changes with regard to the way the timer driver is used. You should see the same sequence of calls with the same arguments (assuming deterministic timing, of course -- in practice if you have external hardware this may be hard to achieve) regardless of what is happening with the PM states. My guess is that what is happening is that the sleep state handling is mucking with interrupt delivery and/or modifying the RTC counter unexpectedly.

@sslupsky
Contributor Author

sslupsky commented Dec 4, 2019

I'm still working on this. I have dug a little deeper and I think I have found an issue in z_add_timeout(). When using a periodic timer (a timer that keeps running), one expects the time period between successive expiries to remain constant. The way z_timer_expiration_handler() and z_add_timeout() are implemented, they determine the time of the next timeout relative to the time the function was called, using elapsed(). Is this the intended way to calculate the next timeout? I suggest the next timeout should be calculated relative to the last timeout.

I first noticed this when I started to trace the kernel timeouts. I noticed that the timer usually loses a few milliseconds each time it fires (i.e., it fires a few milliseconds early), and that the error accumulates over time. I have attached some sample output below. The output is generated by a call to LOG_DBG() when the work function is executed upon expiry of each timer.

With regard to the "LED: Blink" messages, you will note that successive messages are output approximately 230 ms early each time for the first four timeouts. At the fifth timeout, 00:00:59.462,829, it slips in the other direction (late), presumably because something with higher priority delayed it. It is late the next time and then resumes the 230 ms early pattern. The other messages are driven from other work functions that trigger from their own k_timer. (Each timer has a period of 10 seconds.)

I should probably create a new issue for this?

[00:00:09.768,341] <dbg> myApp.led: LED: Blink
[00:00:09.868,408] <dbg> myApp.bps: BPS: Fetch sample
[00:00:09.868,621] <dbg> myApp.ths: THS: Fetch sample
[00:00:09.868,835] <dbg> myApp.als: ALS: Fetch sample
[00:00:09.869,049] <dbg> myApp.imu: IMU: Fetch sample
[00:00:19.534,820] <dbg> myApp.bps: BPS: Fetch sample
[00:00:19.535,217] <dbg> myApp.ths: THS: Fetch sample
[00:00:19.536,499] <dbg> myApp.led: LED: Blink
[00:00:19.635,925] <dbg> myApp.als: ALS: Fetch sample
[00:00:19.636,138] <dbg> myApp.imu: IMU: Fetch sample
[00:00:29.301,544] <dbg> myApp.bps: BPS: Fetch sample
[00:00:29.301,940] <dbg> myApp.ths: THS: Fetch sample
[00:00:29.303,039] <dbg> myApp.led: LED: Blink
[00:00:29.402,587] <dbg> myApp.als: ALS: Fetch sample
[00:00:29.402,801] <dbg> myApp.imu: IMU: Fetch sample
[00:00:39.068,298] <dbg> myApp.bps: BPS: Fetch sample
[00:00:39.068,695] <dbg> myApp.ths: THS: Fetch sample
[00:00:39.069,976] <dbg> myApp.led: LED: Blink
[00:00:39.171,295] <dbg> myApp.als: ALS: Fetch sample
[00:00:39.171,508] <dbg> myApp.imu: IMU: Fetch sample
[00:00:48.834,136] <dbg> myApp.led: LED: Blink
[00:00:48.934,234] <dbg> myApp.bps: BPS: Fetch sample
[00:00:48.934,448] <dbg> myApp.ths: THS: Fetch sample
[00:00:48.934,661] <dbg> myApp.als: ALS: Fetch sample
[00:00:48.934,875] <dbg> myApp.imu: IMU: Fetch sample
[00:00:58.602,142] <dbg> myApp.als: ALS: Fetch sample
[00:00:58.602,539] <dbg> myApp.imu: IMU: Fetch sample
[00:00:58.603,851] <dbg> myApp.bps: BPS: Fetch sample
[00:00:58.604,064] <dbg> myApp.ths: THS: Fetch sample
[00:00:59.462,829] <dbg> myApp.led: LED: Blink
[00:01:08.368,896] <dbg> myApp.als: ALS: Fetch sample
[00:01:08.369,293] <dbg> myApp.imu: IMU: Fetch sample
[00:01:08.370,056] <dbg> myApp.bps: BPS: Fetch sample
[00:01:08.370,452] <dbg> myApp.ths: THS: Fetch sample
[00:01:09.955,902] <dbg> myApp.led: LED: Blink
[00:01:18.955,718] <dbg> myApp.als: ALS: Fetch sample
[00:01:18.956,115] <dbg> myApp.imu: IMU: Fetch sample
[00:01:18.956,878] <dbg> myApp.bps: BPS: Fetch sample
[00:01:18.957,275] <dbg> myApp.ths: THS: Fetch sample
[00:01:19.722,930] <dbg> myApp.led: LED: Blink
[00:01:28.721,862] <dbg> myApp.ths: THS: Fetch sample
[00:01:28.723,907] <dbg> myApp.als: ALS: Fetch sample
[00:01:28.724,121] <dbg> myApp.imu: IMU: Fetch sample
[00:01:28.724,334] <dbg> myApp.bps: BPS: Fetch sample
[00:01:29.490,509] <dbg> myApp.led: LED: Blink
[00:01:38.489,624] <dbg> myApp.ths: THS: Fetch sample
[00:01:38.491,119] <dbg> myApp.als: ALS: Fetch sample
[00:01:38.491,333] <dbg> myApp.imu: IMU: Fetch sample
[00:01:38.491,546] <dbg> myApp.bps: BPS: Fetch sample
[00:01:39.257,781] <dbg> myApp.led: LED: Blink
[00:01:48.256,622] <dbg> myApp.als: ALS: Fetch sample
[00:01:48.257,019] <dbg> myApp.imu: IMU: Fetch sample
[00:01:48.257,781] <dbg> myApp.bps: BPS: Fetch sample
[00:01:48.258,178] <dbg> myApp.ths: THS: Fetch sample
[00:01:49.025,329] <dbg> myApp.led: LED: Blink
[00:01:58.023,529] <dbg> myApp.als: ALS: Fetch sample
[00:01:58.023,925] <dbg> myApp.imu: IMU: Fetch sample
[00:01:58.024,688] <dbg> myApp.ths: THS: Fetch sample
[00:01:58.025,085] <dbg> myApp.bps: BPS: Fetch sample
[00:01:59.160,278] <dbg> myApp.led: LED: Blink
[00:02:07.790,222] <dbg> myApp.als: ALS: Fetch sample
[00:02:07.790,618] <dbg> myApp.bps: BPS: Fetch sample
[00:02:07.790,832] <dbg> myApp.imu: IMU: Fetch sample
[00:02:07.791,046] <dbg> myApp.ths: THS: Fetch sample
[00:02:08.928,039] <dbg> myApp.led: LED: Blink
[00:02:17.555,603] <dbg> myApp.als: ALS: Fetch sample
[00:02:17.555,999] <dbg> myApp.bps: BPS: Fetch sample
[00:02:17.558,746] <dbg> myApp.imu: IMU: Fetch sample
[00:02:17.558,959] <dbg> myApp.ths: THS: Fetch sample
[00:02:18.695,465] <dbg> myApp.led: LED: Blink
[00:02:27.322,570] <dbg> myApp.als: ALS: Fetch sample
[00:02:27.322,967] <dbg> myApp.bps: BPS: Fetch sample
[00:02:27.324,279] <dbg> myApp.imu: IMU: Fetch sample
[00:02:27.324,493] <dbg> myApp.ths: THS: Fetch sample
[00:02:28.463,073] <dbg> myApp.led: LED: Blink
[00:02:37.089,202] <dbg> myApp.imu: IMU: Fetch sample
[00:02:37.089,599] <dbg> myApp.ths: THS: Fetch sample
[00:02:37.090,911] <dbg> myApp.als: ALS: Fetch sample
[00:02:37.091,125] <dbg> myApp.bps: BPS: Fetch sample
[00:02:38.230,560] <dbg> myApp.led: LED: Blink
[00:02:46.855,712] <dbg> myApp.als: ALS: Fetch sample
[00:02:46.856,109] <dbg> myApp.bps: BPS: Fetch sample
[00:02:46.857,421] <dbg> myApp.imu: IMU: Fetch sample
[00:02:46.857,635] <dbg> myApp.ths: THS: Fetch sample
[00:02:47.998,168] <dbg> myApp.led: LED: Blink
[00:02:56.622,344] <dbg> myApp.imu: IMU: Fetch sample
[00:02:56.622,741] <dbg> myApp.ths: THS: Fetch sample
[00:02:56.624,053] <dbg> myApp.als: ALS: Fetch sample
[00:02:56.624,267] <dbg> myApp.bps: BPS: Fetch sample
[00:02:57.765,655] <dbg> myApp.led: LED: Blink
[00:03:06.388,854] <dbg> myApp.als: ALS: Fetch sample
[00:03:06.389,251] <dbg> myApp.bps: BPS: Fetch sample
[00:03:06.390,563] <dbg> myApp.imu: IMU: Fetch sample
[00:03:06.390,777] <dbg> myApp.ths: THS: Fetch sample
[00:03:07.533,264] <dbg> myApp.led: LED: Blink
[00:03:16.155,487] <dbg> myApp.imu: IMU: Fetch sample
[00:03:16.155,883] <dbg> myApp.ths: THS: Fetch sample
[00:03:16.157,196] <dbg> myApp.als: ALS: Fetch sample
[00:03:16.157,409] <dbg> myApp.bps: BPS: Fetch sample
[00:03:17.300,750] <dbg> myApp.led: LED: Blink

@andyross
Collaborator

andyross commented Dec 4, 2019

I'm not sure I understand the test case. Those log messages are prefixed with timestamps that are generated using the same subsystem that is interpreting the ticks. If you're seeing skew between timeout expiration and a call to current time, something bad is going on[1]. By definition, a call to get the current uptime (in ticks) from within a timeout handler should return exactly the expiration tick calculated when it was registered.

Maybe the driver z_clock_elapsed() function is returning something strange in the presence of suspend? That could produce errors like this.

I still recommend looking at this at the driver level first to see what the difference is in kernel-visible behavior between sleep states and normal idle. If you're seeing skews on the order of hundreds (!) of 32 kHz clock cycles, that's not an infrastructure bug. The driver is lying.

[1] One edge case here is a very delayed interrupt: if the ISR delivery got delayed by more than a full tick (that's a REALLY late interrupt), then it's possible to see a k_uptime_get() call that looks to be in the future relative to where the timer was expecting to land. That might not be completely inconsistent with the behavior you're seeing -- you'd need an external clock source to test that though, log message timestamping won't work.

@sslupsky
Contributor Author

sslupsky commented Dec 9, 2019

@andyross I found some issues with the i2c sam0 driver (#21232 and #21233). Since correcting these I haven't noticed the timers going wonky. I am not certain this is the root cause, but I haven't observed the timer period varying between 2 and 18 seconds for a couple of days now. So I think your suggestion was right: something funky with the interrupts could cause the timers to go wonky.

My log continues to show that the timer(s) is timing out about 230ms early and this continues to accumulate ... so this issue still remains.

@carlescufi
Member

I forgot to mention that I disabled the SysTick and replaced it with the RTC, following some of the work I found in the Zephyr codebase related to the nRF5x. So, the RTC runs at 32 kHz and I derive the "tick" from the RTC. I have tested this and it appears to work. I modified the clock configuration in soc.h and soc_samd21x.c as follows:

Are you using a system timer driver that is not in the tree? Because if that is the case then I suggest you start by posting a Pull Request with the new system timer driver so others can reproduce the problem. Otherwise opening an issue against k_timer when using your own, downstream timer driver will not help much.

@sslupsky
Contributor Author

sslupsky commented Mar 8, 2020

@bendiscz @andyross
Here is some additional information regarding this problem. I have looped Martin into this as it appears he had some involvement with the sam0 rtc driver.

I have improved my understanding of what the underlying issues are that cause this behaviour.

Here is a screenshot of four oscilloscope traces:

[screenshot: rtc tick register latency]
The top trace (purple) illustrates when the MCU is sleeping. (low = sleep, high = awake).
The second trace (magenta) illustrates the time spent waiting for the RTC register to sync in rtc_sync().
The third trace (green) illustrates i2c SDA.
The fourth trace(yellow) illustrates i2c SCL.

Notable observations:

  1. The total duration of the wake time is approximately 4.5ms.
  2. The primary contributor to the time spent awake is the time spent waiting for the sam0 RTC COUNT and COMP registers to sync.
  3. The kernel appears to access the RTC registers that require syncing about 24 times during the wake cycles shown here.
  4. i2c transactions have long idle periods where the SCL is held low because interrupts are blocked while the rtc registers are syncing.

I've been struggling for several weeks now trying to find a workaround for the excessive latency of the rtc registers in the sam0 devices. I am not certain whether other mcu architectures share this latency issue with the timer used for the kernel ticks, but if they do, the problem has very serious consequences: it causes massive power consumption issues, severe timing issues on the i2c bus, and the erratic kernel timers mentioned above. I suspect there are other drivers affected by this problem that cause other symptoms as well.

My first question is: can anything be done with the kernel timeout api to reduce the number of times the rtc registers are accessed? That is, can the rtc count register (read by calls to z_clock_elapsed() and z_clock_cycle_get_32()) be read once instead of 22 times? Can the number of calls to z_clock_set_timeout() be managed better? Cleaning up the number of times the rtc registers are touched would help immensely with this problem.

So far I haven't had much success reducing the latency of the sam0 rtc register sync. One approach I haven't had a chance to try yet is to use the SysTick and the rtc together: the SysTick would only be used when the mcu is awake, and the rtc registers would only be used to wake from sleep. The SysTick would need to be re-synced every time the mcu wakes, since the SysTick is suspended during sleep, and I am uncertain at the moment how to reliably coordinate a sync between the two timers. I would appreciate it if anyone could comment on whether this approach has been considered before and whether there are any perceived issues with it.

Thanks for your patience. I've spent a great deal of time on these issues and would benefit from others' perspectives and advice.

@sslupsky
Contributor Author

sslupsky commented Mar 12, 2020

@andyross Regarding the kernel function sys_power_save_idle(), I have a question about the need to call set_timeout(). The next timeout is set by z_clock_announce() when the rtc causes the idle exit. If another interrupt causes the idle exit, I do not think that will cause a problem. So, it seems redundant to call it again here?

Is the only reason to call set_timeout() here because of what the comment literally says, that is, only because we need to support the CONFIG_TICKLESS_IDLE_THRESH API? If that is the case, could we change this so that if CONFIG_TICKLESS_IDLE_THRESH is set to zero, the call to set_timeout() is avoided?

static void sys_power_save_idle(void)
{
	s32_t ticks = z_get_next_timeout_expiry();

	/* The documented behavior of CONFIG_TICKLESS_IDLE_THRESH is
	 * that the system should not enter a tickless idle for
	 * periods less than that.  This seems... silly, given that it
	 * saves no power and does not improve latency.  But it's an
	 * API we need to honor...
	 */
#ifdef CONFIG_SYS_CLOCK_EXISTS
	z_set_timeout_expiry((ticks < IDLE_THRESH) ? 1 : ticks, true);
#endif

@carlescufi
Member

@sslupsky any updates regarding this particular issue?

@sslupsky
Contributor Author

@carlescufi Yes, the two PRs I referenced appear to have resolved this. It was due to a weird interaction between the RTC and the I2C drivers for the sam0. The i2c driver did not properly handle the generation of the stop bit, so the i2c transaction went on "forever". The RTC driver has large delays due to "synchronization" of the registers. I addressed these issues in the PRs and also added i2c slave support to the i2c driver. I think I may have made a few more tweaks to these drivers since submitting the PRs.

@carlescufi carlescufi added the priority: low label and removed the priority: medium label May 26, 2020
@carlescufi carlescufi changed the title from "k_timer timeouts are erratic when sleeping" to "Invalid interaction between the RTC and the I2C drivers for the sam0" May 26, 2020
@carlescufi carlescufi added the platform: Microchip SAM label May 26, 2020
sslupsky added a commit to sslupsky/zephyr that referenced this issue Jul 10, 2020
Optimizations:
When using system power management, bus transactions can be
interrupted by low power modes, which causes excessive power
consumption and can cause the transactions to terminate before
completing. Further, other interrupts can cause bus transaction
anomalies.

When using the RESTART flag, the MB and SB bits must be cleared.

This commit prevents interruption of the bus during low power modes and
correctly clears the MB or SB flags when a bus transaction is restarted.
Also fixes an initialization problem with the INACTOUT timer and ensures
the LOWTOUT is properly configured when using i2c_configure().

The GCLK_SERCOM_SLOW clock is used to time the i2c
time-outs and must be configured to use a 32KHz oscillator.
See 28.6.3.1

This commit configures GCLK4 to 32768 Hz

Fixes zephyrproject-rtos#21711, zephyrproject-rtos#21549, zephyrproject-rtos#21233, zephyrproject-rtos#21232, zephyrproject-rtos#21114, zephyrproject-rtos#21092

A problem can arise when back to back transactions occur before
the STOP transaction is complete.

This commit also introduces a busy wait for the STOP to complete
with a timeout in the event there is a bus failure.

Signed-off-by: Steven Slupsky <[email protected]>
sslupsky added a commit to sslupsky/zephyr that referenced this issue Jul 10, 2020
The driver issues a READREQ for every access to the COUNT
register which causes 180us of delay.  This is a large
delay that causes several problems with other systems
including i2c.

This commit uses continuous read requests (RCONT) to
optimize accessing the COUNT register.  This reduces (but does not
eliminate) the time spent synchronizing the COUNT register.
This commit also prevents interrupts from corrupting COUNT and COMP
register access.

The kernel does not like it when the RTC does not tick
after waking from sleep.  RCONT enables continuously syncing the
COUNT register but after sleep, we need to wait for a new read
request to complete.  However, the SYNCBUSY flag is not set
when RCONT is enabled so we cannot use that to do the sync.
So, we wait for the RTC COUNT register to begin ticking again by
reading the COUNT register.

We should wait a minimum of TICK_THRESHOLD, which is the amount
of time it takes to sync the register.

Fixes zephyrproject-rtos#21549, zephyrproject-rtos#21114, zephyrproject-rtos#21092

Fix comment typo

The worst case update time after sleep requires 2 sync busy
synchronizations.

This commit updates the TICK_THRESHOLD to reflect the
additional time required for the additional sync busy.

Fix clock initialisation conflict.
Fix whitespace

mnkp recommended that a separate commit be created for the
system power management. This commit splits power_samd2x.c
out into a separate commit.

Signed-off-by: Steven Slupsky <[email protected]>
@github-actions

This issue has been marked as stale because it has been open for (more than) 60 days with no activity. Remove the stale label or add a comment saying that you would like the label removed; otherwise this issue will automatically be closed in 14 days. Note that you can always re-open a closed issue at any time.
