First, there is not really a midplane that impacts the performance of the MX240.
You have direct connections between each of the DPCs (actually, per PFE on each of them) and each of the SCBs, which implement the switch fabric.
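To picture it, here is a toy model of that connectivity (the counts are illustrative, not a statement about any particular configuration):

```python
# Toy model: every PFE on every DPC has a direct link to every SCB,
# so the "midplane" is just point-to-point traces, not a shared bus.
DPCS, PFES_PER_DPC, SCBS = 2, 4, 2   # illustrative counts

fabric_links = [
    (f"DPC{d}/PFE{p}", f"SCB{s}")
    for d in range(DPCS)
    for p in range(PFES_PER_DPC)
    for s in range(SCBS)
]
print(len(fabric_links), "direct PFE-to-SCB links")   # 16
```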
Today, the total throughput of the device is limited by the capacity of the DPCs. If you take a DPC with 4 PFEs (4x10GE or 40x1GE), you get a maximum of 40Gbps of I/O traffic. One SCB is enough to do line rate, so this tells you we have at least 40Gbps from each DPC into the fabric.
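If you like the unicast arithmetic spelled out, here is a trivial Python sketch (the port counts are the ones above; everything else is multiplication):

```python
# Back-of-the-envelope check of the unicast numbers above.
PFES_PER_DPC = 4
IO_PER_PFE_GBPS = 10   # each PFE serves 1x10GE or 10x1GE of front-panel I/O

dpc_io_gbps = PFES_PER_DPC * IO_PER_PFE_GBPS
print(f"Max unicast I/O per DPC: {dpc_io_gbps} Gbps")   # 40 Gbps

# For line rate through a single SCB, the fabric must carry at least
# that much, so each DPC needs >= 40Gbps towards the fabric for unicast.
```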
However, you might know that we have a unique way of supporting multicast across the MX-series (or any other distributed-PFE system like the M120, M320 or T-series). Indeed, we use internal binary trees to replicate traffic from one PFE to the others.
We do support line-rate multicast from one 10GE port to all other 10GE ports in the chassis. With binary replication, each PFE forwards at most two copies of the stream into the fabric, so a 10Gbps stream means we need 20Gbps per PFE towards the fabric. And that's indeed the case.
So, to summarize, each DPC has 4 x 20 = 80Gbps of capacity towards a single-SCB fabric on the MX240.
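To see why a binary tree never asks more than two copies of any single PFE, here is a small sketch. The balanced-tree layout is my own illustration (the actual replication trees the PFEs build may differ); the point is only the per-node fan-out:

```python
def replicate(pfes):
    """Count how many copies each PFE sends into the fabric when a stream
    is replicated down a balanced binary tree (illustrative layout)."""
    sends = {p: 0 for p in pfes}
    def walk(rest, root):
        if not rest:
            return
        mid = len(rest) // 2
        for branch in (rest[:mid], rest[mid:]):
            if branch:
                sends[root] += 1             # root puts one copy on the fabric
                walk(branch[1:], branch[0])  # branch head replicates further
    walk(pfes[1:], pfes[0])                  # pfes[0] is the ingress PFE
    return sends

sends = replicate(list(range(8)))            # e.g. 8 PFEs in the chassis
print(max(sends.values()))                   # 2 copies max per PFE
# 2 copies x 10Gbps = 20Gbps per PFE; 4 PFEs x 20Gbps = 80Gbps per DPC.
```

Every PFE gets exactly one copy, yet no single PFE ever sends more than two, which is what keeps the fabric requirement at 2x the stream rate instead of Nx.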
Each SCB's capacity is obviously enough to receive all this traffic and send it across.
BTW, note that the MX240 actually has more available fabric capacity per line card than an MX960: you get only 1/4th of the line-card slots but half of the switching power (i.e. 1 active SCB instead of 2), so each slot ends up with twice the fabric bandwidth.
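In numbers (I'm assuming the usual 3-vs-12 line-card slot counts behind the "1/4th" above, and counting fabric bandwidth in units of one active SCB, so only the ratio matters):

```python
# Per-slot fabric capacity, measured in "active SCBs per line-card slot".
chassis = {
    "MX240": {"lc_slots": 3,  "active_scbs": 1},
    "MX960": {"lc_slots": 12, "active_scbs": 2},
}

for name, c in chassis.items():
    per_slot = c["active_scbs"] / c["lc_slots"]
    print(f"{name}: {per_slot:.3f} SCB-equivalents per slot")

# MX240: 0.333 vs MX960: 0.167 -> twice the fabric bandwidth per slot.
```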
Maybe not what you expected 😉