
fixed small error #2


Closed

Conversation

0xflotus

I fixed a small typo.


zhuowei commented May 12, 2019

@0xflotus We didn't change this file: this is a typo in upstream Swift itself.

You might want to send this to the apple/swift repository.

0xflotus closed this May 12, 2019
pull bot pushed a commit that referenced this pull request Apr 2, 2021

[CodeCompletion] Migrate some tests to batch completion test #2
pull bot pushed a commit that referenced this pull request Jun 26, 2021
* Synchronize both versions of actor_counters.swift test

* Synchronize on Job address

Make sure to synchronize on the Job address (AsyncTasks are Jobs, but not
all Jobs are AsyncTasks).
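
As a minimal sketch of that address discipline (toy types and hand-declared TSan interface functions, not the actual runtime code): the executor side only ever holds a `Job *`, so the release side should annotate the same address by casting the task down to `Job *` first.

```cpp
// Toy stand-ins; in the real runtime AsyncTask derives from Job.
struct Job { void *schedulerPrivate[2]; };
struct AsyncTask : Job { void (*runEntry)(AsyncTask *); };

// TSan interface functions, declared by hand for this sketch; they are
// provided by the runtime when building with -fsanitize=thread.
extern "C" void __tsan_release(void *addr);
extern "C" void __tsan_acquire(void *addr);

void notifyScheduled(AsyncTask *task) {
  // Annotate the Job address, not the AsyncTask address, so it matches...
  __tsan_release(static_cast<Job *>(task));
}

void runJob(Job *job) {
  __tsan_acquire(job);  // ...the acquire done where only a Job * is available.
}
```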

* Add fprintf debug output for TSan acquire/release

* Add tsan_release edge on task creation

Without this, we get false data races between the creation of a task
and its immediate scheduling and execution on a different thread.

False positive for `Sanitizers/tsan/actor_counters.swift` test:
```
WARNING: ThreadSanitizer: data race (pid=81452)
  Read of size 8 at 0x7b2000000560 by thread T5:
    #0 Counter.next() <null>:2 (a.out:x86_64+0x1000047f8)
    #1 (1) suspend resume partial function for worker(identity:counters:numIterations:) <null>:2 (a.out:x86_64+0x100005961)
    #2 swift::runJobInEstablishedExecutorContext(swift::Job*) <null>:2 (libswift_Concurrency.dylib:x86_64+0x280ef)

  Previous write of size 8 at 0x7b2000000560 by main thread:
    #0 Counter.init(maxCount:) <null>:2 (a.out:x86_64+0x1000046af)
    #1 Counter.__allocating_init(maxCount:) <null>:2 (a.out:x86_64+0x100004619)
    #2 runTest(numCounters:numWorkers:numIterations:) <null>:2 (a.out:x86_64+0x100006d2e)
    #3 swift::runJobInEstablishedExecutorContext(swift::Job*) <null>:2 (libswift_Concurrency.dylib:x86_64+0x280ef)
    #4 main <null>:2 (a.out:x86_64+0x10000a175)
```

New edge with this change:
```
[4357150208] allocate task 0x7b3800000000, parent = 0x0
[4357150208] creating task 0x7b3800000000 with parent 0x0
[4357150208] tsan_release on 0x7b3800000000    <<< new release edge
[139088221442048] tsan_acquire on 0x7b3800000000
[139088221442048] trying to switch from executor 0x0 to 0x7ff85e2d9a00
[139088221442048] switch failed, task 0x7b3800000000 enqueued on executor 0x7ff85e2d9a00
[139088221442048] enqueue job 0x7b3800000000 on executor 0x7ff85e2d9a00
[139088221442048] tsan_release on 0x7b3800000000
[139088221442048] tsan_release on 0x7b3800000000
[4357150208] tsan_acquire on 0x7b3800000000
counters: 1, workers: 1, iterations: 1
[4357150208] allocate task 0x7b3c00000000, parent = 0x0
[4357150208] creating task 0x7b3c00000000 with parent 0x0
[4357150208] tsan_release on 0x7b3c00000000    <<< new release edge
[139088221442048] tsan_acquire on 0x7b3c00000000
[4357150208] task 0x7b3800000000 waiting on task 0x7b3c00000000, going to sleep
[4357150208] tsan_release on 0x7b3800000000
[4357150208] tsan_release on 0x7b3800000000
[139088221442048] getting current executor 0x0
[139088221442048] tsan_release on 0x7b3c00000000
...
```

rdar://78932849
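
To make the new edge concrete, here is a rough, self-contained sketch of the idea (invented names, not `swift_task_create_commonImpl()` itself; build with `-fsanitize=thread` so the hand-declared TSan interface functions resolve). The stores made while setting the task up play the role of `Counter.init` above, and the worker's first access plays the role of `Counter.next()`.

```cpp
#include <thread>

// TSan interface functions, declared by hand for this sketch.
extern "C" void __tsan_release(void *addr);
extern "C" void __tsan_acquire(void *addr);

struct Job { };
struct AsyncTask : Job {
  long counter = 0;                 // stands in for the task's captured state
};

AsyncTask *createTask() {
  auto *task = new AsyncTask();
  task->counter = 7;                          // setup store (cf. Counter.init)
  __tsan_release(static_cast<Job *>(task));   // <<< new release edge at creation
  return task;
}

void runOnWorker(Job *job) {
  __tsan_acquire(job);                        // executor acquires before running
  static_cast<AsyncTask *>(job)->counter += 1;  // cf. Counter.next()
}

int main() {
  AsyncTask *task = createTask();
  // In the real runtime the worker thread already exists in a pool, so this
  // release/acquire pair is the only ordering TSan can see between the two sides.
  std::thread worker(runOnWorker, static_cast<Job *>(task));
  worker.join();
  delete task;
}
```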

* Add static_cast<Job *>()

* Move TSan release edge to swift_task_enqueueGlobal()

Move the TSan release edge from `swift_task_create_commonImpl()` to
`swift_task_enqueueGlobalImpl()`.  Task creation itself is not an event
that needs synchronization; what matters is that task creation "happens
before" execution of that task on another thread.

This edge is usually added when the task is scheduled via
`swift_task_enqueue()` (which then usually calls
`swift_task_enqueueGlobal()`).  However, not all task scheduling goes
through the `swift_task_enqueue()` funnel, since some places call the more
specific `swift_task_enqueueGlobal()` directly.  So let's annotate this
function (duplicate edges aren't harmful) to ensure we cover all
schedule events, including newly-created tasks (our original problem
here).
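
A sketch of that funnel reasoning with invented names (not the actual runtime code): putting the release in the global-enqueue path covers every way a job can reach another thread, and an extra release on the same address from the generic enqueue path is just a harmless duplicate.

```cpp
// TSan interface function, declared by hand for this sketch; provided by the
// runtime when building with -fsanitize=thread.
extern "C" void __tsan_release(void *addr);

struct Job { };

void enqueueGlobal(Job *job) {
  // Release edge: the caller's setup of the job "happens before" whatever
  // thread eventually dequeues and runs it.
  __tsan_release(job);
  // ... hand the job to the global executor's queue ...
}

void enqueue(Job *job) {
  // The usual funnel; it may add its own (duplicate, harmless) edge before
  // reaching enqueueGlobal() in the common case. Callers that go straight to
  // enqueueGlobal() are still covered by the release there.
  __tsan_release(job);
  enqueueGlobal(job);
}
```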

rdar://78932849

Co-authored-by: Julian Lettner <[email protected]>
pull bot pushed a commit that referenced this pull request Oct 24, 2021
pull bot pushed a commit that referenced this pull request Dec 2, 2021
Case #1. Literal zero = natural alignment.
   %1 = integer_literal $Builtin.Int64, 0
   %2 = builtin "uncheckedAssertAlignment"
	(%0 : $Builtin.RawPointer, %1 : $Builtin.Int64) : $Builtin.RawPointer
   %3 = pointer_to_address %2 : $Builtin.RawPointer to [align=1] $*Int

   Erases the `pointer_to_address` `[align=]` attribute.

Case #2. Literal nonzero = forced alignment.

   %1 = integer_literal $Builtin.Int64, 16
   %2 = builtin "uncheckedAssertAlignment"
	(%0 : $Builtin.RawPointer, %1 : $Builtin.Int64) : $Builtin.RawPointer
   %3 = pointer_to_address %2 : $Builtin.RawPointer to [align=1] $*Int

   Promotes the `pointer_to_address` `[align=]` attribute to a higher value.

Case #3. Folded dynamic alignment.

   %1 = builtin "alignof"<T>(%0 : $@thin T.Type) : $Builtin.Word
   %2 = builtin "uncheckedAssertAlignment"
	(%0 : $Builtin.RawPointer, %1 : $Builtin.Int64) : $Builtin.RawPointer
   %3 = pointer_to_address %2 : $Builtin.RawPointer to [align=1] $*T

   Erases the `pointer_to_address` `[align=]` attribute.
pull bot pushed a commit that referenced this pull request Sep 23, 2022
While trying to reuse the liveness-points analysis originally added to DI for
injecting actor hops for more general purposes, Pavel and I discovered
that, at the point where we inject the hops, the liveness information might
not yet be fully computed.

That appears to be the case because we were computing the fully-initialized
points before having processed destroy/releases of TheMemory. While this
most likely had no influence on the actor hop injection, it does affect
what the outgoing AvailabilitySet contains for a block. In particular, for
this example:

```swift
struct X {
  init(cond: Bool) {
    var _storage: (name: String, age: Int)
    _storage.name = ""
    if cond {
      _storage.age = 30
    } else {
      _storage.age = 40
    }
  }
}
```

But because we determine the full-initialization points before processing
the destroy, the liveness analysis doesn't iterate enough to correctly
determine the out-availability of blocks 1 and 3 (corresponding to the then
and else blocks in the example above). Here's the debug output showing that issue:

```
*** Definite Init looking at:   %5 = mark_uninitialized [var] %4 : $*(name: String, age: Int) // users: %37, %12, %22, %32

Get liveness 0, #1 at   assign %11 to %13 : $*String                    // id: %14
Get liveness 1, #1 at   assign %21 to %23 : $*Int                       // id: %24
  Get liveness for block 1
    Iteration 0
    Result: (yn)
Get liveness 1, #1 at   assign %31 to %33 : $*Int                       // id: %34
  Get liveness for block 3
    add block 2 to worklist
    Iteration 0
      Block 2 out: (yn)
    Iteration 1
      Block 2 out: (yn)
    Result: (yn)
full-init-finder: rejecting bb0 b/c non-Yes OUT avail
full-init-finder: rejecting bb1 b/c non-Yes OUT avail
full-init-finder: rejecting bb2 b/c no non-load uses.
full-init-finder: rejecting bb3 b/c non-Yes OUT avail
full-init-finder: rejecting bb4 b/c no non-load uses.
Get liveness 0, #2 at   destroy_addr %5 : $*(name: String, age: Int)    // id: %37
  Get liveness for block 4
    add block 3 to worklist
    add block 1 to worklist
    Iteration 0
      Block 1 out: (yy)
      Block 3 out: (yy)
    Iteration 1
      Block 1 out: (yy)
      Block 3 out: (yy)
    Result: (yy)
```

So, this patch simply sinks that computation so it happens afterwards, forcing
the incremental liveness analysis to also consider the liveness at the point of
the destroy, but before any other transformations or modifications have been made
to the CFG to handle a destroy of something partially initialized.
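
A schematic of that reordering with invented names (not the actual DI/LifetimeChecker code); only the ordering of the calls reflects the patch:

```cpp
// Simplified stand-in for the DI pass over one TheMemory value.
struct MemoryLivenessAnalysis {
  void processNonLoadUses() { /* seeds per-block availability from the assigns */ }
  void processDestroys()    { /* forces incremental liveness up to destroy_addr */ }
  void computeFullyInitializedPoints() { /* now sees (yy) OUT sets, not (yn) */ }
};

void checkMemory(MemoryLivenessAnalysis &analysis) {
  analysis.processNonLoadUses();
  // Previously computeFullyInitializedPoints() ran here, before the destroys
  // had pushed the liveness analysis through blocks 1 and 3.
  analysis.processDestroys();
  analysis.computeFullyInitializedPoints();  // sunk to after destroy processing
}
```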
kateinoigakukun added a commit that referenced this pull request Jun 28, 2023