
Memory mapping section #3

@yupferris

Description

First of all, this is awesome; fantastic effort, and after a (somewhat quick) scan this is just lovely :) Awesome stuff!!

In the memory mapping section, you mention you're not entirely sure how memory mapping works at the hardware level, and that it might be that each peripheral will shut itself off when addressed out of its mapped range. I wanted to provide some insight into how this is usually done; hopefully you find it as interesting as I do and I hope it helps :)

Typically, many of the peripherals in these systems are going to be off-the-shelf components (RAM, IO controllers, etc.), and adding extra hardware inside them for memory mapping would require additional chips to be fabbed specifically for this application, which would be prohibitively expensive. Instead, the responsibility for memory mapping is put in the system designer's hands, so that each component is only responsible for one thing and can thus potentially be used in a broader range of applications.

Most peripherals from this era were still communicating with parallel interfaces. This (as opposed to serial) makes them a bit simpler to use and allows them to transfer more data per clock tick, at the expense of requiring more wires between the components. For example, a 4 kilobyte ROM might have:

  • 12 address pins
  • 8 data pins
  • 1 enable pin
  • VCC/ground etc
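To make the pin list above concrete, here's a minimal sketch (in Rust, since that's the language of the writeup) of how you might model such a ROM's external interface in an emulator. The type and field names are my own invention, not anything from the writeup: 12 address pins select one of 2^12 = 4096 bytes, and the chip only drives its data pins when its enable pin is asserted.

```rust
/// Sketch of a 4 KB parallel ROM: 12 address pins, 8 data pins, 1 enable pin.
struct Rom4k {
    data: [u8; 4096],
}

impl Rom4k {
    /// `addr` models the 12 address pins (only the low 12 bits are wired up);
    /// `enable` models the chip-enable pin. A disabled chip leaves the data
    /// bus floating, modeled here as `None`.
    fn read(&self, addr: u16, enable: bool) -> Option<u8> {
        if enable {
            Some(self.data[(addr & 0x0fff) as usize])
        } else {
            None
        }
    }
}
```

Note that because only 12 address lines exist, any high bits the rest of the system leaves on the bus are simply not seen by the chip - which is exactly how address mirroring falls out of real hardware.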

Now, in terms of memory mapping, these systems would have employed additional hardware for this task. This hardware would sit between the address/data pins on the CPU and the address/data/enable/etc. pins of the peripherals and, depending on the address the CPU placed on its address pins, route the address lines to the peripherals and enable/disable them. This mostly comes down to various simple gates/comparators that directly drive the peripherals' pins from the CPU's address pins, but the key is that it had to be very simple and fast, as the CPU's timing constraints would have been pretty strict. This is one of the reasons dedicated hardware was employed - another being the sheer number of pins it saved and the board space that freed up.
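In an emulator, that decode logic boils down to a pure function of the address. Here's a hedged sketch for a made-up 16-bit system (the memory map below is invented for illustration, not the PS1's actual layout): the "gates" look at the top two address bits to assert exactly one chip select, while the low bits pass straight through to the selected peripheral's address pins.

```rust
/// Which peripheral's enable line is asserted for a given CPU address.
#[derive(Debug, PartialEq)]
enum ChipSelect {
    Rom, // 0x0000..=0x3fff
    Ram, // 0x4000..=0x7fff
    Io,  // 0x8000..=0xffff
}

/// Combinational address decode: in hardware this is just a couple of
/// gates on the top two address bits; the returned offset is the
/// untouched low address lines routed on to the selected chip.
fn decode(addr: u16) -> (ChipSelect, u16) {
    match addr >> 14 {
        0b00 => (ChipSelect::Rom, addr & 0x3fff),
        0b01 => (ChipSelect::Ram, addr & 0x3fff),
        _ => (ChipSelect::Io, addr & 0x7fff),
    }
}
```

The nice property here mirrors the hardware: the decode is stateless and instantaneous, so it never gets in the way of the CPU's bus timing.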

Now, I'm not particularly familiar with the PS1 internals, but after checking the Wikipedia page it looks like the "System Control Coprocessor (Cop0)" is the thing gluing all this together. Thinking of it this way, we can also see why it makes sense that it would handle interrupts, breakpoints, etc. We can think of this bit of glue logic as something similar to the north/south bridge chips on modern motherboards - it's really just an I/O breakout that offloads logistics from the CPU and ties everything together without taking up all your board space.

Other examples of such chips would be the C64's PLA and, I believe, the two 74139s in the NES.
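For a feel of how simple those parts are: the 74139 is a dual 2-to-4 decoder with active-low outputs and an active-low enable. One half of it can be sketched in a few lines (this is my own illustration of the part's truth table, not anything NES-specific):

```rust
/// One half of a 74139 dual 2-to-4 decoder: a 2-bit select and an
/// active-low enable in, four active-low outputs out. When enabled,
/// exactly one output goes low; otherwise all four stay high.
fn decode_74139(select: u8, enable_n: bool) -> [bool; 4] {
    let mut out = [true; 4]; // active-low: all inactive (high) by default
    if !enable_n {
        out[(select & 0b11) as usize] = false; // selected output goes low
    }
    out
}
```

Feed the CPU's top address bits into `select` and you get four chip-enable lines for free - which is the whole address-decoding scheme in one cheap package.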

Anyways, hope you find this little rant useful :) I'm pretty inspired by this writeup you're doing, and I've wanted to do an N64 emulator in Rust for a while now - perhaps I'll follow suit and do that in this style as well!
