I'm not talking about RCCE, just the write operation of the core itself.
Even if I use RCCE, it cannot tell me when the write operation is finished.
For example, let's see the scenario below.
Core 0                Core 1
Writes C to addr A
                      Reads from addr A
Can you tell me that Core 1 always gets C?
If Core 1 is much closer to the memory controller, it might read the address before C reaches memory.
I'm not an expert; this is just my assumption.
If I'm wrong, please explain. Thanks.
It's probably important to note that the Send and Recv functions in RCCE don't go through main memory to transfer data. They transfer data between the cores through the network and the on-tile communication buffer, and they use synchronization methods to ensure that the core knows WHEN the transfer is done. Proximity to the memory controllers doesn't affect anything you're worried about.
Yes! Bryan may just have pointed out the key issue in this thread. RCCE message passing occurs through the MPB and bypasses L2. I haven't so far seen any user get concerned about the distance of a core from a memory controller.
Here are a couple of quotes from the Programmer's Guide.
"When a message-passing program sends a message from one core to another, internally it is moving data from the L1 cache of the sending core to its MPB and then to the L1 cache of the receiving core. The MPB allows L1 cache lines to move between cores without having to use the off-chip memory."
"Message-passing data are typed as message-passing buffer type (MPBT). Data typed as MPBT bypass the L2 cache. If you write to data that are already resident in the cache, the cache line may (if L1 is configured as write-through) or may not (if L1 is configured as write-back) be moved to memory."
All writes are acknowledged end to end over the NOC, and the P54C has only one write buffer (so only one outstanding write). Therefore, if a producer core writes data to DRAM and then writes a flag to the MPB of a consumer that is much closer to the DRAM than the producer, the DRAM data is guaranteed to be there before the consumer sees the flag change. There is no race due to the two cores' different distances from the DRAM.
However, be careful: that presumes you handle caching correctly (or use uncacheable reads/writes). Because there is no cache coherence, stale data in the consumer's cache can cause problems; it will receive no invalidate from the producer's write.
I'm not sure I understand your post thoroughly.
So, if a core writes to address 0x1000 and then writes to address 0xF000, the second write should wait until the acknowledgement of the first write arrives.
Is this right?
If the second write were released before the first acknowledgement arrived, that would be a problem.
What about the relationship between a cacheable write and an uncacheable write?
I mean, the caches are not write-allocate.
So, if the address is not currently cached, the write should go through to main memory.
I think uncacheable writes should not be buffered in the write buffer, but cacheable writes should be.
For example, I want to communicate between two cores, so I will use the NCM and the MPB: the NCM for interrupts, the MPB for data.
The scenario is shown below.
Core A                              Core B
1) writes data into the MPB
2) generates an interrupt via NCM
                                    3) receives the interrupt from Core A
                                    4) reads the data from the MPB
Suppose core A writes only a single datum into the MPB, small enough to sit in the write buffer, and then generates the interrupt in step 2.
According to my assumption, that datum can still reside in the write buffer, so core B cannot read the value core A sent.
Am I right or wrong?
Even if core A sends more than one datum, it could still be a problem, because the last datum core A wrote may not yet have propagated to the other core.