2 Replies · Latest reply on Oct 19, 2011 5:38 AM by aprell

    The Go language's concurrency constructs on the SCC

    aprell

      Hey everyone,

       

      I have implemented most of the runtime support needed to experiment with Go-style concurrency on the SCC. For those familiar with Go, this means goroutines and channels. Goroutines are concurrently executing functions or, more generally, computations; channels are a way for goroutines to communicate and synchronize by exchanging messages. I think channels are the interesting part here, because their implementation can take advantage of the SCC's on-die message buffers. You can read about it in this draft paper (link to PDF in Google Docs). Of course, I appreciate any feedback, comments, and suggestions.
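
      For those who haven't seen Go before, here is a small example in plain, standard Go (not taken from the paper) of two goroutines that communicate and synchronize with their caller over a channel:

          package main

          import "fmt"

          // sum adds up a slice and sends the result over channel c.
          // Started with "go", it runs concurrently with its caller.
          func sum(xs []int, c chan int) {
              total := 0
              for _, x := range xs {
                  total += x
              }
              c <- total // sending also synchronizes with the receiver
          }

          func main() {
              xs := []int{7, 2, 8, -9, 4, 0}
              c := make(chan int) // unbuffered: send and receive rendezvous

              go sum(xs[:len(xs)/2], c) // goroutine 1
              go sum(xs[len(xs)/2:], c) // goroutine 2

              a, b := <-c, <-c // receive the two partial sums, in either order
              fmt.Println(a, b, a+b)
          }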

       

      Thanks,

       

      Andreas

        • 1. Re: The Go language's concurrency constructs on the SCC
          mwvantol

           Hi Andreas, an interesting read. It is actually a bit similar to our work on implementing the SVP model of concurrency, on which we hope to present a paper at the upcoming MARC symposium.

           

           I had some questions, though, after reading your paper, but perhaps they also stem from my unfamiliarity with Go. The introduction states: "Instead of using locks to guard access to shared data, programmers are encouraged to pass around references and thereby transfer ownership so that only one thread is allowed to access the data at any one time." To me, this implies that Go has a notion of shared memory which is not necessarily synchronizing, and that this is why messages are used for synchronization. However, what are the assumptions on this memory? What consistency model is assumed? And how do you cover this in your implementation? It is not trivial on the distributed, non-coherent shared memory of the SCC, and it is definitely not trivial to do efficiently. From your paper I assume that you currently only support transferring data through the channels, which is the case for the two examples in the appendix. Is that correct?

           

           A second question that arose: when you delegate the execution of a thread to a different core on the SCC, how do you make sure that this core has the instruction stream to execute the thread? I assume you simply start the same binary on all cores and mark only one as a 'master' that starts executing the Go program? (This is actually what we do.)

          • 2. Re: The Go language's concurrency constructs on the SCC
            aprell

            Hi Michiel,

             

            Thanks for your comments. Looking forward to reading about your work on SVP!

             

             You are right in both cases. Go is shared memory at its core, with full support for "threads and locks", but it encourages programming at a higher level than that. The idea for safe concurrency is to use channels and message passing for all communication and synchronization. Whether passing references over channels, as a way to sort out who is allowed to access what in shared memory, is always a good idea is a different question; I guess not. For those cases where message passing turns out to be unnecessarily complex, you can always fall back on shared memory primitives. The goroutines in my two example programs communicate exclusively over channels; no shared memory has been involved so far.
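
             Just to illustrate the ownership idea in plain Go (this is not one of my SCC examples): a goroutine can hand a reference to the receiver over a channel and simply stop touching the data after the send, so no lock is needed:

                 package main

                 import "fmt"

                 // Buffer is some piece of shared data whose ownership is handed around.
                 type Buffer struct {
                     data []int
                 }

                 // fill owns buf while it works on it, then transfers ownership by
                 // sending the reference over the channel and not using buf afterwards.
                 func fill(buf *Buffer, done chan *Buffer) {
                     for i := range buf.data {
                         buf.data[i] = i * i
                     }
                     done <- buf // ownership moves along with the reference
                 }

                 func main() {
                     done := make(chan *Buffer)
                     go fill(&Buffer{data: make([]int, 8)}, done)

                     buf := <-done // from here on, main accesses buf exclusively
                     fmt.Println(buf.data)
                 }

             After the send, fill no longer uses buf, so main can read and modify it without a lock; the channel operation is what transfers ownership.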

             

            Yes, I am still running on top of RCCE. You can see that in the examples, which contain calls to RCCE_init and RCCE_finalize. Programs are started with rccerun and the executable gets loaded on each core. One core, the master thread, executes the code between TASKING_init and TASKING_exit. The other cores enter a scheduling loop and wait for goroutines to run.