```rust
    /// Protected against aliasing violations from other pointers.
    ///
    /// Items protected like this cause UB when they are invalidated, *but* the pointer itself may
    /// still be used to issue a deallocation.
    ///
    /// This is required for LLVM IR pointers that are `noalias` but *not* `dereferenceable`.
    WeakProtector,

    /// Protected against any kind of invalidation.
    ///
    /// Items protected like this cause UB when they are invalidated or the memory is deallocated.
    /// This is strictly stronger protection than `WeakProtector`.
    ///
    /// This is required for LLVM IR pointers that are `dereferenceable` (and also allows `noalias`).
    StrongProtector,
}

/// An item on the per-location stack, controlling which pointers may access this location and how.
pub struct Item {
    /// The permission this item grants.
    perm: Permission,
    /// The pointers the permission is granted to.
    tag: Tag,
    /// An optional protector, ensuring the item cannot get popped until `CallId` is over.
    protector: Option<(ProtectorKind, CallId)>,
}

/// Per-location stack of borrow items.
pub struct Stack {
    /// Used *mostly* as a stack; never empty.
```

To attach metadata to a particular function call, we assign a fresh ID to every stack frame.
In other words, the per-stack-frame `CallId` is initialized by `Tracking::new_call`.
We say that a `CallId` is *active* if the call stack contains a stack frame with that ID.
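
As a minimal sketch (assuming `CallId` is a plain integer ID and that we can inspect the call stack as a simple list of such IDs, which is not how the interpreter actually represents it), this check could look as follows:

```rust
type CallId = u64; // stand-in; the real type is defined by the interpreter

/// Returns whether the call identified by `id` is still running,
/// i.e., whether some frame on the call stack carries this ID.
fn call_is_active(call_stack: &[CallId], id: CallId) -> bool {
    call_stack.iter().any(|frame| *frame == id)
}
```
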
**Note**: Miri uses a slightly more complex system to track the set of active `CallId`; that is just an optimization to avoid having to scan the call stack all the time.
### Preliminaries for items
For brevity, we will write `(tag: perm)` to represent `Item { tag, perm, protector: None }`, and `(tag: perm; kind, call)` to represent `Item { tag, perm, protector: Some((kind, call)) }`.
The following defines whether a permission grants a particular kind of memory access to a pointer with the right tag:
`Unique` and `SharedReadWrite` grant all accesses, `SharedReadOnly` grants only read access.
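
As a sketch, this rule can be written as a small function; `AccessKind` is a made-up helper enum for this document, and `Disabled` (introduced below) grants nothing:

```rust
// Sketch of the access-granting rule; `AccessKind` is a hypothetical helper enum.
#[derive(Clone, Copy, PartialEq)]
enum AccessKind { Read, Write }

#[derive(Clone, Copy, PartialEq)]
enum Permission { Unique, SharedReadWrite, SharedReadOnly, Disabled }

fn grants(perm: Permission, access: AccessKind) -> bool {
    match perm {
        Permission::Unique | Permission::SharedReadWrite => true,  // all accesses
        Permission::SharedReadOnly => access == AccessKind::Read,  // reads only
        Permission::Disabled => false,                             // no accesses at all
    }
}
```
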
### Accessing memory
On every memory access (reads and writes -- see below for deallocation),
we perform the following extra operation for every location that gets accessed (i.e., for a 4-byte access, this happens for each of the 4 bytes):
1. Find the granting item. If there is none, this is UB.
2. Check if this is a read access or a write access.
- For write accesses, pop all *blocks* above the one containing the granting item. That is, remove all items above the granting one, except if the granting item is a `SharedReadWrite` in which case the consecutive `SharedReadWrite` above it are kept (but everything beyond is popped). If any of the popped items is protected (weakly or strongly) with a `CallId` of an active call, we have UB.
- For read accesses, disable all `Unique` items above the granting one: change their permission to `Disabled`. This means they cannot be used any more. We do not remove them from the stack to avoid merging two blocks of `SharedReadWrite`. If any disabled item is protected (weakly or strongly) with a `CallId` of an active call, we have UB.
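
The write-access case can be sketched as follows for a single location's stack; the types are simplified stand-ins for the definitions above (the `borrows` field name and the helper signature are made up for this document), and `call_is_active` is the check sketched earlier:

```rust
// Sketch of the per-location write-access rule (illustration only, not Miri's implementation).
#[derive(Clone, Copy, PartialEq)]
enum Permission { Unique, SharedReadWrite, SharedReadOnly, Disabled }
#[derive(Clone, Copy, PartialEq)]
enum ProtectorKind { WeakProtector, StrongProtector }
type Tag = u64;     // stand-in
type CallId = u64;  // stand-in

#[derive(Clone, Copy)]
struct Item {
    perm: Permission,
    tag: Tag,
    protector: Option<(ProtectorKind, CallId)>,
}

struct Stack {
    borrows: Vec<Item>, // bottom of the stack first; field name made up here
}

impl Stack {
    /// Perform the per-location part of a write access with the given tag.
    /// Returns `Err(())` to signal UB.
    fn write_access(&mut self, tag: Tag, call_is_active: impl Fn(CallId) -> bool) -> Result<(), ()> {
        // 1. Find the granting item: the topmost item with this tag whose
        //    permission grants write access. If there is none, this is UB.
        let granting = self
            .borrows
            .iter()
            .rposition(|it| {
                it.tag == tag && matches!(it.perm, Permission::Unique | Permission::SharedReadWrite)
            })
            .ok_or(())?;

        // 2. If the granting item is `SharedReadWrite`, keep the consecutive
        //    `SharedReadWrite` items directly above it (the rest of its block).
        let mut keep_until = granting + 1;
        if self.borrows[granting].perm == Permission::SharedReadWrite {
            while keep_until < self.borrows.len()
                && self.borrows[keep_until].perm == Permission::SharedReadWrite
            {
                keep_until += 1;
            }
        }

        // 3. Pop everything above. Popping an item that is protected (weakly or
        //    strongly) by a still-active call is UB.
        for item in self.borrows.drain(keep_until..) {
            if let Some((_kind, call)) = item.protector {
                if call_is_active(call) {
                    return Err(());
                }
            }
        }
        Ok(())
    }
}
```

Read accesses follow the same structure, except that `Unique` items above the granting one are changed to `Disabled` instead of being popped, with the same UB check for protected items.
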
### Reborrowing
To reborrow a pointer, we are given:
- a (typed) place, i.e., a location in memory, a tag and the type of the data we expect there (from which we can compute the size);
When executing `Retag(kind, place)`, we check if `place` holds a reference (`&[mut] T`) or a box.
For those we perform the following steps:
1. We compute a fresh tag: `Tracking::new_ptr_id()`.
2. We determine if and how we will want to protect the items we are going to generate:
   If `kind == FnEntry`, then a protector will be added; for references we use a `StrongProtector`, for boxes a `WeakProtector`.
   (This means for both of them there is UB if the pointer gets invalidated while the call is active; and for references, additionally there is UB if the memory the pointer points to gets deallocated in any way -- even if the pointer itself is used for that deallocation.)
   For any other `kind`, no protector is added.
3. We perform reborrowing of the memory this pointer points to with the new tag and indicating whether we want protection, treating boxes as `RefKind::Unique { two_phase: false }`.
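
The protector decision from step 2 can be summarized by a small function; `RetagKind` and `PointerKind` here are simplified stand-ins for the real retag machinery:

```rust
// Sketch of the protector choice at retag time (illustration only).
enum RetagKind { FnEntry, TwoPhase, Raw, Default }
enum PointerKind { Reference, Box }
enum ProtectorKind { WeakProtector, StrongProtector } // as in the definitions above

fn protector_for(kind: RetagKind, ptr: PointerKind) -> Option<ProtectorKind> {
    match (kind, ptr) {
        // `FnEntry` retags add a protector: strong for references, weak for boxes.
        (RetagKind::FnEntry, PointerKind::Reference) => Some(ProtectorKind::StrongProtector),
        (RetagKind::FnEntry, PointerKind::Box) => Some(ProtectorKind::WeakProtector),
        // All other retag kinds add no protector.
        _ => None,
    }
}
```
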
We do not recurse into fields of structs or other compound types, only "bare" references/... get retagged.
We never recurse through a pointer indirection.
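
For example (a hypothetical function, only to illustrate the scope of retagging):

```rust
struct S<'a> {
    inner: &'a mut i32, // a reference stored inside a struct field
}

// At function entry, `Retag(FnEntry, ..)` applies to the "bare" reference
// arguments `x` and `s` themselves. It does not recurse into the fields of `S`
// (so `inner` keeps its tag), and it does not look through the indirection
// behind `s`.
fn f(x: &mut i32, s: &mut S<'_>) {
    let _ = (x, s);
}
```
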
### Deallocating memory
Memory deallocation first acts like a write access through the pointer used for deallocation.
After that is done, we additionally check all *strong* protectors remaining in the stack: if any of them is still active, we have undefined behavior.
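
A sketch of this check, reusing the simplified types from the write-access sketch above; only strong protectors of still-active calls matter here:

```rust
/// Deallocation check (illustration only): after the write access, deallocating
/// the memory is UB if any remaining item carries a *strong* protector whose
/// call is still active. Weak protectors are not checked here: the pointer they
/// protect may still be used to issue the deallocation.
fn deallocation_check(stack: &Stack, call_is_active: impl Fn(CallId) -> bool) -> Result<(), ()> {
    for item in &stack.borrows {
        if let Some((ProtectorKind::StrongProtector, call)) = &item.protector {
            if call_is_active(*call) {
                return Err(()); // UB
            }
        }
    }
    Ok(())
}
```
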