kernel/utilities/leasable_buffer.rs
// Licensed under the Apache License, Version 2.0 or the MIT License.
// SPDX-License-Identifier: Apache-2.0 OR MIT
// Copyright Tock Contributors 2022.

//! Defines a SubSlice type to implement leasable buffers.
//!
//! A leasable buffer decouples maintaining a reference to a buffer from the
//! presentation of the accessible buffer. This allows layers to operate on
//! "windows" of the buffer while enabling the original reference (and in
//! effect the entire buffer) to be passed back in a callback.
//!
//! Challenge with Normal Rust Slices
//! ---------------------------------
//!
//! Commonly in Tock we want to partially fill a static buffer with some data,
//! call an asynchronous operation on that data, and then retrieve that buffer
//! via a callback. In common Rust code, that might look something like this
//! (for this example we are transmitting data using I2C).
//!
//! ```rust,ignore
//! // Statically declare the buffer. Make sure it is long enough to handle all
//! // I2C operations we need to perform.
//! let buffer = static_init!([u8; 64], [0; 64]);
//!
//! // Populate the buffer with our current operation.
//! buffer[0] = OPERATION_SET;
//! buffer[1] = REGISTER;
//! buffer[2] = 0x7; // Value to set the register to.
//!
//! // Call the I2C hardware to transmit the data, passing the slice we actually
//! // want to transmit and not the full buffer.
//! i2c.write(&buffer[0..3]);
//! ```
//!
//! The issue with this is that within the I2C driver, `buffer` is now only
//! three bytes long. When the I2C driver issues the callback to return the
//! buffer after the transmission completes, the returned buffer will have a
//! length of three. Effectively, the full static buffer is lost.
//!
//! To avoid this, in Tock we always call operations with both the buffer and a
//! separate length. We now have two lengths: the provided `length` parameter,
//! which is the size of the buffer actually in use, and `buffer.len()`, which
//! is the full size of the static memory.
//!
//! ```rust,ignore
//! // Call the I2C hardware with a reference to the full buffer and the length
//! // of that buffer it should actually consider.
//! i2c.write(buffer, 3);
//! ```
//!
//! Now the I2C driver has a reference to the full buffer, and so when it
//! returns the buffer via callback the client will have access to the full
//! static buffer.
//!
//! Challenge with Buffers + Length
//! -------------------------------
//!
//! Using a reference to the buffer and a separate length parameter is
//! sufficient to address the challenge of needing variable size buffers when
//! using static buffers and complying with Rust's memory management. However,
//! it still has two drawbacks.
//!
//! First, all code in Tock that operates on buffers must correctly handle the
//! separate buffer and length values as though the `buffer` is a `*u8` pointer
//! (as in more traditional C code). We lose many of the benefits of the higher
//! level slice primitive in Rust. For example, calling `buffer.len()` when
//! using data from the buffer is essentially meaningless, as the correct
//! length is the `length` parameter. When copying data _to_ the buffer,
//! however, not overflowing the buffer is critical, and using `buffer.len()`
//! _is_ correct. With a separate reference and length, managing this is left
//! to the programmer.
//!
//! Second, using only a reference and length assumes that the contents of the
//! buffer will always start at the first entry in the buffer (i.e.,
//! `buffer[0]`). To support more generic use of the buffer, we might want to
//! pass a reference, length, _and offset_, so that we can use arbitrary
//! regions of the buffer, while again retaining a reference to the original
//! buffer to use in callbacks.
//!
//! For example, in networking code it is common to parse headers and then pass
//! the payload to upper layers. With slices, that might look something like:
//!
//! ```rust,ignore
//! // Check for a valid header of size 10.
//! if valid_header(buffer) {
//!     self.client.payload_callback(&buffer[10..]);
//! }
//! ```
//!
//! The issue is that again the client loses access to the beginning of the
//! buffer and that memory is lost.
//!
//! We might also want to do this when calling lower-layer operations to avoid
//! moving and copying data around. Consider a networking layer that needs to
//! add a header; we might want to do something like:
//!
//! ```rust,ignore
//! buffer[11] = PAYLOAD;
//! network_layer_send(buffer, 11, 1);
//!
//! fn network_layer_send(buffer: &'static mut [u8], offset: usize, length: usize) {
//!     buffer[0..11].copy_from_slice(&header);
//!     lower_layer_send(buffer);
//! }
//! ```
//!
//! Now we have to keep track of two parameters which are both redundant with
//! the API provided by Rust slices.
//!
//! Leasable Buffers
//! ----------------
//!
//! A leasable buffer is a data structure that addresses these challenges.
//! Simply, it provides the Rust slice API while internally always retaining a
//! reference to the full underlying buffer. To narrow a buffer, the leasable
//! buffer can be "sliced". To retrieve the full original memory, a leasable
//! buffer can be "reset".
//!
//! A leasable buffer can be sliced multiple times. For example, as a buffer is
//! parsed in a networking stack, each layer can call slice on the leasable
//! buffer to remove that layer's header before passing the buffer to the
//! upper layer.
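//!
//! For example (a sketch; the layer names and header-length constants are
//! illustrative, not a real Tock API), each layer trims its own header before
//! handing the buffer up:
//!
//! ```rust,ignore
//! // Link layer: remove the link-layer header, pass the payload up.
//! buffer.slice(LINK_HEADER_LEN..);
//! network_layer.receive(buffer);
//!
//! // Network layer: remove its header in turn.
//! buffer.slice(NET_HEADER_LEN..);
//! transport_layer.receive(buffer);
//!
//! // Once processing completes, a single reset() recovers the entire
//! // original buffer for reuse.
//! buffer.reset();
//! ```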
//!
//! Supporting Mutable and Immutable Buffers
//! ----------------------------------------
//!
//! One challenge with implementing leasable buffers in Rust is preserving the
//! mutability of the underlying buffer. If a mutable buffer is passed as an
//! immutable slice, the mutability of that buffer is "lost" (i.e., when passed
//! back in a callback the buffer will be immutable). To address this, we must
//! implement two versions of a leasable buffer: mutable and immutable. That
//! way a mutable buffer remains mutable.
//!
//! Since in Tock most buffers are mutable, the mutable version is commonly
//! used. However, in cases where size is a concern, immutable buffers from
//! flash storage may be preferable. In those cases the immutable version may
//! be used.
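//!
//! Code that must accept either variant can take a [`SubSliceMutImmut`],
//! which wraps one of the two. A sketch (the `send` function is illustrative):
//!
//! ```rust,ignore
//! fn send(mut buf: SubSliceMutImmut<'static, u8>) {
//!     // The read-only API works for both variants.
//!     let _len = buf.len();
//!     // Mutations apply only when the wrapped buffer is mutable; for an
//!     // Immutable subslice this is a no-op.
//!     buf.map_mut(|b| b[0] = 0xff);
//! }
//! ```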
//!
//! Usage
//! -----
//!
//! `slice()` is used to set the portion of the `SubSlice` that is accessible.
//! `reset()` makes the entire `SubSlice` accessible again. Typically, `slice()`
//! will be called prior to passing the buffer down to lower layers, and
//! `reset()` will be called once the `SubSlice` is returned via a callback.
//!
//! ```rust
//! # use kernel::utilities::leasable_buffer::SubSlice;
//! let mut internal = ['a', 'b', 'c', 'd'];
//! let original_base_addr = internal.as_ptr();
//!
//! let mut buffer = SubSlice::new(&mut internal);
//!
//! buffer.slice(1..3);
//!
//! assert_eq!(buffer.as_ptr(), unsafe { original_base_addr.offset(1) });
//! assert_eq!(buffer.len(), 2);
//! assert_eq!((buffer[0], buffer[1]), ('b', 'c'));
//!
//! buffer.reset();
//!
//! assert_eq!(buffer.as_ptr(), original_base_addr);
//! assert_eq!(buffer.len(), 4);
//! assert_eq!((buffer[0], buffer[1]), ('a', 'b'));
//! ```

// Author: Amit Levy

use core::cmp::min;
use core::ops::{Bound, Range, RangeBounds};
use core::ops::{Index, IndexMut};
use core::slice::SliceIndex;

/// A mutable leasable buffer implementation.
///
/// A leasable buffer can be used to pass a section of a larger mutable buffer
/// but still get the entire buffer back in a callback.
#[derive(Debug, PartialEq)]
pub struct SubSliceMut<'a, T> {
    internal: &'a mut [T],
    active_range: Range<usize>,
}

impl<'a, T> From<&'a mut [T]> for SubSliceMut<'a, T> {
    fn from(internal: &'a mut [T]) -> Self {
        let active_range = 0..(internal.len());
        Self {
            internal,
            active_range,
        }
    }
}

/// An immutable leasable buffer implementation.
///
/// A leasable buffer can be used to pass a section of a larger immutable
/// buffer but still get the entire buffer back in a callback.
#[derive(Debug, PartialEq)]
pub struct SubSlice<'a, T> {
    internal: &'a [T],
    active_range: Range<usize>,
}

impl<'a, T> From<&'a [T]> for SubSlice<'a, T> {
    fn from(internal: &'a [T]) -> Self {
        let active_range = 0..(internal.len());
        Self {
            internal,
            active_range,
        }
    }
}

/// Holder for either a mutable or immutable SubSlice.
///
/// In cases where code needs to support either a mutable or immutable SubSlice,
/// `SubSliceMutImmut` allows the code to store a single type which can
/// represent either option.
pub enum SubSliceMutImmut<'a, T> {
    Immutable(SubSlice<'a, T>),
    Mutable(SubSliceMut<'a, T>),
}

impl<'a, T> From<&'a [T]> for SubSliceMutImmut<'a, T> {
    fn from(value: &'a [T]) -> Self {
        Self::Immutable(value.into())
    }
}

impl<'a, T> From<&'a mut [T]> for SubSliceMutImmut<'a, T> {
    fn from(value: &'a mut [T]) -> Self {
        Self::Mutable(value.into())
    }
}

impl<'a, T> SubSliceMutImmut<'a, T> {
    pub fn reset(&mut self) {
        match *self {
            SubSliceMutImmut::Immutable(ref mut buf) => buf.reset(),
            SubSliceMutImmut::Mutable(ref mut buf) => buf.reset(),
        }
    }

    /// Returns the length of the currently accessible portion of the
    /// SubSlice.
    pub fn len(&self) -> usize {
        match *self {
            SubSliceMutImmut::Immutable(ref buf) => buf.len(),
            SubSliceMutImmut::Mutable(ref buf) => buf.len(),
        }
    }

    pub fn slice<R: RangeBounds<usize>>(&mut self, range: R) {
        match *self {
            SubSliceMutImmut::Immutable(ref mut buf) => buf.slice(range),
            SubSliceMutImmut::Mutable(ref mut buf) => buf.slice(range),
        }
    }

    pub fn as_ptr(&self) -> *const T {
        match *self {
            SubSliceMutImmut::Immutable(ref buf) => buf.as_ptr(),
            SubSliceMutImmut::Mutable(ref buf) => buf.as_ptr(),
        }
    }

    pub fn map_mut(&mut self, f: impl Fn(&mut SubSliceMut<'a, T>)) {
        match self {
            SubSliceMutImmut::Immutable(_) => (),
            SubSliceMutImmut::Mutable(subslice) => f(subslice),
        }
    }
}

impl<T, I> Index<I> for SubSliceMutImmut<'_, T>
where
    I: SliceIndex<[T]>,
{
    type Output = <I as SliceIndex<[T]>>::Output;

    fn index(&self, idx: I) -> &Self::Output {
        match *self {
            SubSliceMutImmut::Immutable(ref buf) => &buf[idx],
            SubSliceMutImmut::Mutable(ref buf) => &buf[idx],
        }
    }
}

impl<'a, T> SubSliceMut<'a, T> {
    /// Create a SubSlice from a passed reference to a raw buffer.
    pub fn new(buffer: &'a mut [T]) -> Self {
        let len = buffer.len();
        SubSliceMut {
            internal: buffer,
            active_range: 0..len,
        }
    }

    fn active_slice(&self) -> &[T] {
        &self.internal[self.active_range.clone()]
    }

    fn active_slice_mut(&mut self) -> &mut [T] {
        &mut self.internal[self.active_range.clone()]
    }

    /// Retrieve the raw buffer used to create the SubSlice. Consumes the
    /// SubSlice.
    pub fn take(self) -> &'a mut [T] {
        self.internal
    }

    /// Resets the SubSlice to its full size, making the entire buffer
    /// accessible again.
    ///
    /// This should only be called by the layer that created the SubSlice, and
    /// not by layers that were passed a SubSlice. Layers which are using a
    /// SubSlice should treat the SubSlice as a traditional Rust slice and not
    /// consider any additional size to the underlying buffer.
    ///
    /// Most commonly, this is called once a sliced leasable buffer is returned
    /// through a callback.
    pub fn reset(&mut self) {
        self.active_range = 0..self.internal.len();
    }

    /// Returns the length of the currently accessible portion of the SubSlice.
    pub fn len(&self) -> usize {
        self.active_slice().len()
    }

    /// Returns a pointer to the currently accessible portion of the SubSlice.
    pub fn as_ptr(&self) -> *const T {
        self.active_slice().as_ptr()
    }

    /// Returns a mutable pointer to the currently accessible portion of the
    /// SubSlice.
    pub fn as_mut_ptr(&mut self) -> *mut T {
        self.active_slice_mut().as_mut_ptr()
    }

    /// Returns a mutable slice of the currently accessible portion of the
    /// SubSlice.
    pub fn as_slice(&mut self) -> &mut [T] {
        &mut self.internal[self.active_range.clone()]
    }

    /// Returns `true` if the SubSlice is sliced internally.
    ///
    /// This is a useful check when switching between code that uses SubSlices
    /// and code that uses traditional slice-and-length. Since slice-and-length
    /// _only_ supports using the entire buffer it is not valid to try to use a
    /// sliced SubSlice.
    pub fn is_sliced(&self) -> bool {
        self.internal.len() != self.len()
    }

    /// Reduces the range of the SubSlice that is accessible.
    ///
    /// This should be called whenever a layer wishes to pass only a portion of
    /// a larger buffer to another layer.
    ///
    /// For example, if the application layer has a 1500 byte packet buffer,
    /// but wishes to send a 250 byte packet, the upper layer should slice the
    /// SubSlice down to its first 250 bytes before passing it down:
    ///
    /// ```rust,ignore
    /// let buffer = static_init!([u8; 1500], [0; 1500]);
    /// let mut s = SubSliceMut::new(buffer);
    /// s.slice(0..250);
    /// network.send(s);
    /// ```
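    ///
    /// Slicing composes with any existing slice: the range is interpreted
    /// relative to the currently accessible window and clamped to it, so
    /// out-of-range bounds shrink the window rather than panic. A sketch:
    ///
    /// ```rust,ignore
    /// s.slice(5..10); // accessible window: elements 5..10 of the original
    /// s.slice(2..);   // window is now elements 7..10 of the original
    /// s.slice(0..99); // clamped: window stays at elements 7..10
    /// ```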
    pub fn slice<R: RangeBounds<usize>>(&mut self, range: R) {
        let start = match range.start_bound() {
            Bound::Included(s) => *s,
            Bound::Excluded(s) => *s + 1,
            Bound::Unbounded => 0,
        };

        let end = match range.end_bound() {
            Bound::Included(e) => *e + 1,
            Bound::Excluded(e) => *e,
            Bound::Unbounded => self.active_range.end - self.active_range.start,
        };

        let new_start = min(self.active_range.start + start, self.active_range.end);
        let new_end = min(new_start + (end - start), self.active_range.end);

        self.active_range = Range {
            start: new_start,
            end: new_end,
        };
    }
}

impl<T, I> Index<I> for SubSliceMut<'_, T>
where
    I: SliceIndex<[T]>,
{
    type Output = <I as SliceIndex<[T]>>::Output;

    fn index(&self, idx: I) -> &Self::Output {
        &self.internal[self.active_range.clone()][idx]
    }
}

impl<T, I> IndexMut<I> for SubSliceMut<'_, T>
where
    I: SliceIndex<[T]>,
{
    fn index_mut(&mut self, idx: I) -> &mut Self::Output {
        &mut self.internal[self.active_range.clone()][idx]
    }
}

impl<'a, T> SubSlice<'a, T> {
    /// Create a SubSlice from a passed reference to a raw buffer.
    pub fn new(buffer: &'a [T]) -> Self {
        let len = buffer.len();
        SubSlice {
            internal: buffer,
            active_range: 0..len,
        }
    }

    fn active_slice(&self) -> &[T] {
        &self.internal[self.active_range.clone()]
    }

    /// Retrieve the raw buffer used to create the SubSlice. Consumes the
    /// SubSlice.
    pub fn take(self) -> &'a [T] {
        self.internal
    }

    /// Resets the SubSlice to its full size, making the entire buffer
    /// accessible again.
    ///
    /// This should only be called by the layer that created the SubSlice, and
    /// not by layers that were passed a SubSlice. Layers which are using a
    /// SubSlice should treat the SubSlice as a traditional Rust slice and not
    /// consider any additional size to the underlying buffer.
    ///
    /// Most commonly, this is called once a sliced leasable buffer is returned
    /// through a callback.
    pub fn reset(&mut self) {
        self.active_range = 0..self.internal.len();
    }

    /// Returns the length of the currently accessible portion of the SubSlice.
    pub fn len(&self) -> usize {
        self.active_slice().len()
    }

    /// Returns a pointer to the currently accessible portion of the SubSlice.
    pub fn as_ptr(&self) -> *const T {
        self.active_slice().as_ptr()
    }

    /// Returns a slice of the currently accessible portion of the SubSlice.
    pub fn as_slice(&self) -> &[T] {
        &self.internal[self.active_range.clone()]
    }

    /// Returns `true` if the SubSlice is sliced internally.
    ///
    /// This is a useful check when switching between code that uses SubSlices
    /// and code that uses traditional slice-and-length. Since slice-and-length
    /// _only_ supports using the entire buffer it is not valid to try to use a
    /// sliced SubSlice.
    pub fn is_sliced(&self) -> bool {
        self.internal.len() != self.len()
    }

    /// Reduces the range of the SubSlice that is accessible.
    ///
    /// This should be called whenever a layer wishes to pass only a portion of
    /// a larger buffer to another layer.
    ///
    /// For example, if the application layer has a 1500 byte packet buffer,
    /// but wishes to send a 250 byte packet, the upper layer should slice the
    /// SubSlice down to its first 250 bytes before passing it down:
    ///
    /// ```rust,ignore
    /// let buffer = unsafe {
    ///     core::slice::from_raw_parts(core::ptr::addr_of!(_ptr_in_flash), 1500)
    /// };
    /// let mut s = SubSlice::new(buffer);
    /// s.slice(0..250);
    /// network.send(s);
    /// ```
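    ///
    /// Slicing composes with any existing slice: the range is interpreted
    /// relative to the currently accessible window and clamped to it, so
    /// out-of-range bounds shrink the window rather than panic. A sketch:
    ///
    /// ```rust,ignore
    /// s.slice(5..10); // accessible window: elements 5..10 of the original
    /// s.slice(2..);   // window is now elements 7..10 of the original
    /// s.slice(0..99); // clamped: window stays at elements 7..10
    /// ```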
    pub fn slice<R: RangeBounds<usize>>(&mut self, range: R) {
        let start = match range.start_bound() {
            Bound::Included(s) => *s,
            Bound::Excluded(s) => *s + 1,
            Bound::Unbounded => 0,
        };

        let end = match range.end_bound() {
            Bound::Included(e) => *e + 1,
            Bound::Excluded(e) => *e,
            Bound::Unbounded => self.active_range.end - self.active_range.start,
        };

        let new_start = min(self.active_range.start + start, self.active_range.end);
        let new_end = min(new_start + (end - start), self.active_range.end);

        self.active_range = Range {
            start: new_start,
            end: new_end,
        };
    }
}

impl<T, I> Index<I> for SubSlice<'_, T>
where
    I: SliceIndex<[T]>,
{
    type Output = <I as SliceIndex<[T]>>::Output;

    fn index(&self, idx: I) -> &Self::Output {
        &self.internal[self.active_range.clone()][idx]
    }
}

#[cfg(test)]
mod test {

    use crate::utilities::leasable_buffer::SubSliceMut;
    use crate::utilities::leasable_buffer::SubSliceMutImmut;

    #[test]
    fn subslicemut_create() {
        let mut b: [u8; 100] = [0; 100];
        let s = SubSliceMut::new(&mut b);
        assert_eq!(s.len(), 100);
    }

    #[test]
    fn subslicemut_edit_middle() {
        let mut b: [u8; 10] = [0; 10];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(5..10);
        s[0] = 1;
        s.reset();
        assert_eq!(s.as_slice(), [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]);
    }

    #[test]
    fn subslicemut_double_slice() {
        let mut b: [u8; 10] = [0; 10];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(5..10);
        s.slice(2..5);
        s[0] = 2;
        s.reset();
        assert_eq!(s.as_slice(), [0, 0, 0, 0, 0, 0, 0, 2, 0, 0]);
    }

    #[test]
    fn subslicemut_double_slice_endopen() {
        let mut b: [u8; 10] = [0; 10];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(5..10);
        s.slice(3..);
        s[0] = 3;
        s.reset();
        assert_eq!(s.as_slice(), [0, 0, 0, 0, 0, 0, 0, 0, 3, 0]);
    }

    #[test]
    fn subslicemut_double_slice_beginningopen1() {
        let mut b: [u8; 10] = [0; 10];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(5..10);
        s.slice(..3);
        s[0] = 4;
        s.reset();
        assert_eq!(s.as_slice(), [0, 0, 0, 0, 0, 4, 0, 0, 0, 0]);
    }

    #[test]
    fn subslicemut_double_slice_beginningopen2() {
        let mut b: [u8; 10] = [0; 10];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(..5);
        s.slice(..3);
        s[0] = 5;
        s.reset();
        assert_eq!(s.as_slice(), [5, 0, 0, 0, 0, 0, 0, 0, 0, 0]);
    }

    #[test]
    fn subslicemut_double_slice_beginningopen3() {
        let mut b: [u8; 10] = [0; 10];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(2..5);
        s.slice(..3);
        s[0] = 6;
        s.reset();
        assert_eq!(s.as_slice(), [0, 0, 6, 0, 0, 0, 0, 0, 0, 0]);
    }

    #[test]
    #[should_panic]
    fn subslicemut_double_slice_panic1() {
        let mut b: [u8; 10] = [0; 10];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(2..5);
        s.slice(..3);
        s[3] = 1;
    }

    #[test]
    #[should_panic]
    fn subslicemut_double_slice_panic2() {
        let mut b: [u8; 10] = [0; 10];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(4..);
        s.slice(..3);
        s[3] = 1;
    }

    #[test]
    fn subslicemut_slice_nop() {
        let mut b: [u8; 10] = [0; 10];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(0..10);
        assert!(!s.is_sliced());
    }

    #[test]
    fn subslicemut_slice_empty() {
        let mut b: [u8; 10] = [0; 10];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(1..1);
        assert_eq!(s.len(), 0);
    }

    #[test]
    fn subslicemut_slice_down() {
        let mut b: [u8; 100] = [0; 100];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(0..50);
        assert_eq!(s.len(), 50);
    }

    #[test]
    fn subslicemut_slice_up() {
        let mut b: [u8; 100] = [0; 100];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(0..200);
        assert_eq!(s.len(), 100);
    }

    #[test]
    fn subslicemut_slice_up_ptr() {
        let mut b: [u8; 100] = [0; 100];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(0..200);
        assert_eq!(s.as_slice().len(), 100);
    }

    #[test]
    fn subslicemut_slice_outside() {
        let mut b: [u8; 10] = [0; 10];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(20..25);
        assert_eq!(s.len(), 0);
    }

    #[test]
    fn subslicemut_slice_beyond() {
        let mut b: [u8; 10] = [0; 10];
        let mut s = SubSliceMut::new(&mut b);
        s.slice(6..15);
        assert_eq!(s.len(), 4);
    }

    fn slice_len1<T>(mut s: SubSliceMutImmut<T>) {
        s.slice(4..8);
        s.slice(0..2);
        assert_eq!(s.len(), 2);
    }

    fn slice_len2<T>(mut s: SubSliceMutImmut<T>) {
        s.slice(4..8);
        s.slice(3..);
        assert_eq!(s.len(), 1);
    }

    fn slice_len3<T>(mut s: SubSliceMutImmut<T>) {
        s.slice(4..8);
        s.slice(..);
        assert_eq!(s.len(), 4);
    }

    fn slice_len4<T>(mut s: SubSliceMutImmut<T>) {
        s.slice(5..);
        s.slice(4..);
        assert_eq!(s.len(), 1);
    }

    fn slice_len5<T>(mut s: SubSliceMutImmut<T>) {
        s.slice(5..);
        s.slice(5..);
        assert_eq!(s.len(), 0);
    }

    #[test]
    fn subslicemut_slice_len1() {
        let mut b: [u8; 10] = [0; 10];
        slice_len1(b.as_mut().into())
    }

    #[test]
    fn subslicemut_slice_len2() {
        let mut b: [u8; 10] = [0; 10];
        slice_len2(b.as_mut().into())
    }

    #[test]
    fn subslicemut_slice_len3() {
        let mut b: [u8; 10] = [0; 10];
        slice_len3(b.as_mut().into())
    }

    #[test]
    fn subslicemut_slice_len4() {
        let mut b: [u8; 10] = [0; 10];
        slice_len4(b.as_mut().into())
    }

    #[test]
    fn subslicemut_slice_len5() {
        let mut b: [u8; 10] = [0; 10];
        slice_len5(b.as_mut().into())
    }

    #[test]
    fn subslice_slice_len1() {
        let b: [u8; 10] = [0; 10];
        slice_len1(b.as_ref().into())
    }

    #[test]
    fn subslice_slice_len2() {
        let b: [u8; 10] = [0; 10];
        slice_len2(b.as_ref().into())
    }

    #[test]
    fn subslice_slice_len3() {
        let b: [u8; 10] = [0; 10];
        slice_len3(b.as_ref().into())
    }

    #[test]
    fn subslice_slice_len4() {
        let b: [u8; 10] = [0; 10];
        slice_len4(b.as_ref().into())
    }

    #[test]
    fn subslice_slice_len5() {
        let b: [u8; 10] = [0; 10];
        slice_len5(b.as_ref().into())
    }

    fn slice_contents1(mut s: SubSliceMutImmut<u8>) {
        s.slice(4..8);
        s.slice(0..2);
        assert_eq!(s[0], 4);
        assert_eq!(s[1], 5);
    }

    fn slice_contents2(mut s: SubSliceMutImmut<u8>) {
        s.slice(2..);
        s.slice(5..);
        assert_eq!(s[0], 7);
        assert_eq!(s[1], 8);
        assert_eq!(s[2], 9);
    }

    #[test]
    fn subslicemut_slice_contents1() {
        let mut b: [u8; 10] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
        slice_contents1(b.as_mut().into())
    }

    #[test]
    fn subslicemut_slice_contents2() {
        let mut b: [u8; 10] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
        slice_contents2(b.as_mut().into())
    }

    #[test]
    fn subslice_slice_contents1() {
        let b: [u8; 10] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
        slice_contents1(b.as_ref().into())
    }

    #[test]
    fn subslice_slice_contents2() {
        let b: [u8; 10] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
        slice_contents2(b.as_ref().into())
    }

    fn reset_contents(mut s: SubSliceMutImmut<u8>) {
        s.slice(4..8);
        s.slice(0..2);
        s.reset();
        assert_eq!(s[0], 0);
        assert_eq!(s[1], 1);
        assert_eq!(s[2], 2);
        assert_eq!(s[3], 3);
        assert_eq!(s[4], 4);
        assert_eq!(s[5], 5);
        assert_eq!(s[6], 6);
        assert_eq!(s[7], 7);
        assert_eq!(s[8], 8);
        assert_eq!(s[9], 9);
    }

    #[test]
    fn subslicemut_reset_contents() {
        let mut b: [u8; 10] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
        reset_contents(b.as_mut().into())
    }

    #[test]
    fn subslice_reset_contents() {
        let b: [u8; 10] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
        reset_contents(b.as_ref().into())
    }

    fn reset_panic(mut s: SubSliceMutImmut<u8>) -> u8 {
        s.reset();
        s[s.len()]
    }

    #[test]
    #[should_panic]
    fn subslicemut_reset_panic() {
        let mut b: [u8; 10] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
        reset_panic(b.as_mut().into());
    }

    #[test]
    #[should_panic]
    fn subslice_reset_panic() {
        let b: [u8; 10] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
        reset_panic(b.as_ref().into());
    }
}
855}