RFC: community effort to build portable, scalable debugger protocol


  • RFC: community effort to build portable, scalable debugger protocol

    Hello all!

    This is a request for comments (not to be confused with IETF RFCs) from the hacking community. I am aware that many of you have been engaged in a project hosted here to develop a suitable debugger environment for PS2. If you are unfamiliar with my nick, I have also developed debuggers for various game consoles and emulators in the past. Usually this was a self-contained effort, with other people providing indirect support during development or direct support after the project had stabilized. There are also several other individuals and groups on my 'hit list' for this RFC, and I will be sure to get those people involved as well.

    Some of you have known that I've had a "big idea" for quite some time. And we're all aware that "big ideas" solve nothing unless they manifest into something tangible and useful. The "big idea" I speak of has been described to me as "one debugger to rule them all." This observation may not be far from the truth; I intend to write a debugger [interface] capable of handling any binary for any architecture you throw at it (of course, so long as it has the proper knowledge of that architecture; things like a disassembler, memory maps, etc, which must be easy for the average user to write, if they don't already exist).

    That's part of the whole picture. Actually, that's my personal piece, where I will be expending most of my own energy. That's the "Kodewerx side" of this community effort project. The "community side" would revolve around volunteers from the different hacking communities getting together to formalize a debugger protocol that is portable, scalable, efficient, extensible, and generic enough to cover a wide range (ideally all) of debugging environments.

    I'll try to put this into perspective using Project Artemis as a basis, since you are likely familiar with it already. With a remote debugger like Artemis, you invariably have at least two components: A low-level debugger which resides on your target platform (in this case, a PS2) and a high-level debugger interface with which the hacker interacts to perform complex debugging tasks. The debugger interface here is being designed as a sort of modular component; that is, the low-level debugger can be controlled through a high-level interface on the PS2 itself, or through a high-level interface living on a separate host machine (likely a computer running Windows XP).

    This modularity plays an important role in the justification behind designing a debugger protocol as a community effort. As an example, let's speculate that the authors of the PC-side debugger interface for Project Artemis create the De Facto PS2 debugger interface. On the one hand, this debugger interface may be "locked" to Project Artemis because it could be written with a very specialized protocol to communicate with (send data to and receive data from) the PS2.

    Imagine that someone then wanted to use this De Facto debugger interface to connect to a PS2 emulator. This means the emulator would have to be hacked to emulate the strange debugger protocol, or the interface would have to be hacked to support a new protocol that is easier to integrate with the emulator, or (worst case) both the emulator and interface would have to be hacked together into a single monolithic monstrosity.

    On the other hand, if a "standard" debugger protocol had been devised previously, which allows for debugger interface connections between a multitude of environments (a separate machine over a wire, another program running on the same machine, etc.) then it would be far easier to hack the emulator to support the standard protocol; the interface would work with this hacked emulator out-of-the-box. Even better, the newly hacked emulator would also support the lesser PS2 debugger interfaces which also support the standard protocol. Now you've taken modularity of debugging environments to a whole new level not seen before.


    Anyway, before this whole post gets out of hand with a horde of silly examples and rambling, let me just put it all this way:

    We all need to get together as a team to describe and develop one "debugger protocol to rule them all." Just remember, this is not the Kodewerx project briefly described above, and also keep in mind that the debugger protocol needs enough heft to support far more than just the PS2. Because who knows what other people might use it for today, and who knows what you might use it for tomorrow!

    For some additional information (a lot of reading!) I have written several articles on this subject, as well as the Kodewerx-oriented project. Here are the relevant links:

    * Debugging Modern Computer Architectures - A slightly updated edition of the original forum post I made several months ago to explain the "big idea."
    * Universal Debugger Protocol - A placeholder page which currently describes some of my own personal ideas for how this whole protocol may work. These are NOT mandates; just ideas to help spark your own creativity and get the ball rolling toward a final standardization process.
    * Universal Debugger Project - Focuses on the Kodewerx-oriented debugger interface project, with some interesting history on how the idea came about, and several use cases for the kind of protocol we need.
    * Descend to the Low Level - A blog entry I completed last night which focuses entirely on the Kodewerx-oriented debugger interface and its much overdue birth.


    Now that I've presented my case (perhaps over-presented), I'm requesting comments from you, the hacking community at large, on the debugger protocol ideas presented. And also, while I have your attention, on how we can work together to achieve these (as well as other) goals in the best interest of everyone involved.

    And if you have any questions (I would not be surprised!) just ask. I understand this will be a huge undertaking, and most of the finer details are still flying around in my head at light speed, even though I've really tried to get them all out in writing.

    Well, there you have it! Please read the links I pasted for you, even if you don't have interest after just reading through this forum post... Maybe I can change your mind with the massive amounts of debugging utopia I've explained on those pages.

  • #2
    Wow Parasyte gracing us with his presence, glad to have you here.
    Spoiler Alert!

    THE BAD GUY!!!!!!



    • #3
      As mentioned before, I've had my eye on the Universal Debugger concept since you first posed it quite a while ago. I'm quite interested; whatever I can toss in, I'll toss in. Though that may not be much in the way of low-level development ability, I have other ways of rendering aid.

      A framework such as the one you envision would be a massive breakthrough in not only the video game hacking community, not only the debugging community, but the computer science community as a whole.
      I may be lazy, but I can...zzzZZZzzzZZZzzzZZZ...



      • #4
        I'd help, but I just use debuggers, and almost never handle modifications to them. Best that I can say is that I can think of features from a myriad of debuggers that are quite handy, and sensible to implement. I'm sure the same could be said of Parasyte as well, though.

        The big thing that I see is that, should you be successful enough, the whole setup will be besieged with bloatware requests. So many good programs have been dragged down by bloatware that it only seems reasonable to keep it simple, and make other people customize it one way or another (Boolean definitions sent to the thing to disable unsupported options would be a start). Knowing your previous accomplishments, I think now would be a good time to draw a line, so you'll know what you and all other participants are working towards throughout the project.

        I do have to admit that what you're proposing sounds a lot like the MAME debugger in its multi-platform implementation. If you're able, a MAME-style debugger replacement might be a good way to run a multi-platform debugger through the wringer, or at least get some ideas about the implementation.
        This reality is mine. Go hallucinate your own.



        • #5
          I suppose the first distinction I need to make here is that the debugger interface and the debugger protocol are two very separate projects; one relies on the other (interface relies on protocol), but not the other way around. I would like to get people involved, first, in the protocol. Getting that in place means that you could theoretically use any super-light-weight interface you like; it just needs to support an established protocol. In this way, any bloat from the KW project becomes negligible. This, I think, is the real beauty of modularity.

          That said, I'm not opposed to discussing the decisions I'm making on the interface. Mozilla is not the fastest or most robust foundation to build on, but it's stable, mature, and extremely extensible. The extensibility has been one of the selling points for me; I'm not going to personally 'bloat' the interface with every single debugging feature under the sun. But someone might want to. (Think "Eclipse IDE" ... I truly hate it, but someone out there obviously likes including even the kitchen sink in these kinds of development tools!) Being built like Firefox, such a debugger interface can also be extended like Firefox. Bloat can come in that form of after-market third-party add-ons, instead of pre-packaged with a complete suite of odds and ends.

          Getting the topic back on track with the protocol, maybe I am taking this the wrong way? I'd like to get community participation on this one, so that everyone has a say on what they need. I'd rather not leave it up to one person to define the thing (which explains why the ideas I've given so far have been scarce, but I suppose it needs a kick start of some kind). If that's the way it has to be, then no one will use it, because it will not have the features that they need.

          It's also possible that I haven't been entirely clear about what I'm looking for. So let me restate the question real quick: Imagine you were writing a web browser; a client-side application which must somehow talk to a server on the other side of the world. Now imagine that HTTP has not been invented, yet... you've never heard of it. How do you define the interaction between the client and server to get what you need? In fact, what kind of information do you need to send to the server, and what kind of information do you need to get back? And what happens when you get back information that your client does not expect, or does not understand?

          This is the question: What data does a debugger need to communicate to the interface (and vice-versa) and how does that data get from point-A to point-B?
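          As a purely illustrative sketch of what an answer could look like (the command codes and field widths here are invented, not part of any proposal), the question can be made concrete with a minimal length-prefixed framing scheme:

```python
import struct

# Hypothetical framing, for illustration only: a 1-byte command code,
# a 4-byte big-endian payload length, then the payload itself.
CMD_READ_MEM = 0x01  # invented command codes, not part of any proposal
CMD_MEM_DATA = 0x81

def encode_message(command: int, payload: bytes) -> bytes:
    """Pack a command and its payload into one frame."""
    return struct.pack(">BI", command, len(payload)) + payload

def decode_message(frame: bytes):
    """Unpack a frame back into (command, payload)."""
    command, length = struct.unpack(">BI", frame[:5])
    payload = frame[5:5 + length]
    if len(payload) != length:
        raise ValueError("truncated frame")  # the 'unexpected data' case
    return command, payload

# A client asking for 16 bytes at 0x80000000 might send:
request = encode_message(CMD_READ_MEM, struct.pack(">II", 0x80000000, 16))
cmd, body = decode_message(request)
```

          Every real question in the thread (what commands exist, what the payloads contain, what happens on garbage input) is a decision about a scheme like this one.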

          Let's talk!
          Last edited by Parasyte; 11-02-2008, 05:38:30 PM.



          • #6
            Client-to-Debugger:
            Needs to be able to:
            • send a memory snapshot, on request or in total, to the client;
            • read out the hardware registers;
            • tell the debugger whether it's in an environment where such data can be changed;
            • send confirmation/synchronization signals to allow for breakpoints and memory breakpoints;
            • send its internal operating name and revision data, so that if a flaw is found, specialized code can cover up for potential defects;
            • accept a 'stream of thought' style of data, so it can be interpreted, in a provided format, by programs external to the debugger.
            That's a good start.

            Debugger-to-Client:
            Needs to be able to:
            • send memory/register modifications to the client;
            • give an internal revision number, for similar reasons as above;
            • respond with synchronization, to tell the debugger whether you've crashed and whether it should, by its internal decisions, continue execution, wait for a reconnect, or resume execution.
            It may also be wise to be able to send custom code through the debugging interface, so that otherwise limited debuggers could have on-the-fly functions assembled into some spare RAM (the glut comes from seeing if you can make the debugger side mildly interpretive, and able to modify the ASM based on added info about the ASM's structure), should someone wish to write such code. It should be capable of handling save-state-styled imports, for systems that would be able to handle such an addition. It could probably also do with some file-interpretation code, so it can read in settings, in case it'll be on a device of some sort.

            Unexpected Data:
            Unexpected data should be relatively simple to ignore. Give the commands a defined size in the header (I'm thinking SGB, with its packets and VRAM transfers), as well as a quick CRC of the data after the last byte of the specified size, followed by some separation bytes for even more confirmation, in case you want to make some sort of large data packet. If both sides use the same basic confirmation, it would be simpler to see if something's going wrong.

            I also suspect that if you want to do this correctly, you'll want the option of defining some suggested free RAM space and notes in some way, using external files, based on the system and program being debugged.

            This seems like a lot, and I suppose it is, but I see all of this as a set of basic features that should be extensible to plugins, should someone wish to do weird things with it that would qualify more definitely as bloat.
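            The size-in-header plus CRC idea above could be sketched like this (the field widths, the choice of CRC-32, and the separator value are all assumptions for illustration, not anything agreed on):

```python
import zlib

# Sketch of the framing idea above: a declared size in the header, a CRC
# after the last data byte, and separator bytes for resynchronization.
# Field widths, CRC-32, and the separator value are illustrative guesses.
SEPARATOR = b"\xaa\x55"

def build_packet(data: bytes) -> bytes:
    size = len(data).to_bytes(2, "big")        # declared size
    crc = zlib.crc32(data).to_bytes(4, "big")  # CRC after the data
    return size + data + crc + SEPARATOR

def parse_packet(stream: bytes):
    """Return the payload, or None if the packet should be ignored."""
    size = int.from_bytes(stream[:2], "big")
    data = stream[2:2 + size]
    crc = int.from_bytes(stream[2 + size:6 + size], "big")
    if zlib.crc32(data) != crc:
        return None  # bad CRC: drop the unexpected data
    if stream[6 + size:8 + size] != SEPARATOR:
        return None  # lost sync: hunt for the next separator
    return data

payload = parse_packet(build_packet(b"\x01\x02\x03"))
```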
            This reality is mine. Go hallucinate your own.



            • #7
              Nice documentation you've got so far. I printed out the Wiki pages and will read them carefully ASAP. Sounds very interesting.

              Without any doubt, Artemis and other projects would benefit from such a debugger protocol. Of course, I'm willing to contribute, as far as possible.



              • #8
                ugetab: this is a pretty good start. And I see some good objective thinking, here. I want to respond to your suggestions in more detail as I get time.

                misfire: please excuse any typos and grammatical errors. I haven't proofread any of that, so if you point out any mistakes, I'll be sure to fix them. Also (it is a wiki, after all) you can make small corrections or add to the documents yourself, if you'd like.


                I'll be back with some good followups later. For now I need some sleep (going to bed way early tonight).



                • #9
                  OK, here is that proper response I promised.

                  Originally posted by ugetab
                  Client-to-Debugger:
                  Needs to be able to:
                  • send a memory snapshot, on request or in total, to the client;
                  • read out the hardware registers;
                  • tell the debugger whether it's in an environment where such data can be changed;
                  • send confirmation/synchronization signals to allow for breakpoints and memory breakpoints;
                  • send its internal operating name and revision data, so that if a flaw is found, specialized code can cover up for potential defects;
                  • accept a 'stream of thought' style of data, so it can be interpreted, in a provided format, by programs external to the debugger.
                  That's a good start.
                  Grabbing the contents of memory and registers is important, but it is more important to know what memory and registers are available. This, I imagine, could be queried by the client. The debugger would respond with a 'features map' of some sort. It would contain information such as the specific types of CPUs available, memory maps (both accessible and inaccessible to the CPUs; an example of an 'inaccessible' memory map is the data stored on a memory card, which is usually not mapped to any CPU but is available through some sort of serial bus), hardware capabilities, debugger capabilities, etc.

                  Some examples of hardware capabilities that can be reported: the types of any 'expansion hardware' connected, such as ethernet cards and USB devices... Some examples of debugger capabilities that can be reported: hardware breakpoints/watchpoints (and the total number available), software breakpoints/watchpoints, 'save state' snapshots, etc.

                  I imagine defining some sort of tell-all structure like this is the first step. For cases where a very minimal 'light weight' implementation is required, we should define a client-side array of 'pre-defined architectures' such as bare-bones PS2, PSP, GameCube, NDS, etc. Then the debugger could respond to any architecture queries with a very small response code that can indicate one of the pre-defined architectures. Then the client can gather the heavier information it needs from the pre-defined structure(s). Does this make sense?

                  In pseudo-code, instead of the debugger building and sending a big structure like this:
                  (note this is not meant to reflect an actual PSX)
                  Code:
                  CAPABILITIES_RESPONSE {
                    cpu[0]: mips r4000,
                    mem_map[0]: {
                      ram: 0x80000000 - 0x82000000,
                      scratchpad: 0x1FC00000 - 0x20000000,
                      ...
                    }
                    memcard: spi_bus,
                    ...
                  }
                  It would instead just send a very simple response code:
                  Code:
                  PREDEFINED_CAPABILITIES_RESPONSE {
                    PSX
                  }

                  Now, just from covering this one small piece of the puzzle, I think we can agree that this will be a lot more work than just listing some things we would like to see. Even with this amount of data in a fairly simple structure, we still haven't defined how this structure is actually formed as a data stream; it's not going to be a textual representation, which rules out some common serializations like XML and JSON. :\
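                  As a sketch of the size difference (the architecture code, field layout, and table contents here are all invented for illustration), the client-side table of pre-defined architectures might work like this:

```python
import struct

# Illustrative only: an invented one-byte architecture code, and a
# client-side table that expands it into a full capabilities structure.
ARCH_PSX = 0x01

PREDEFINED = {
    ARCH_PSX: {
        "cpu": ["mips r4000"],
        "mem_map": {"ram": (0x80000000, 0x82000000)},
        "memcard": "spi_bus",
    },
}

def encode_predefined_response(code: int) -> bytes:
    # The entire wire response is a single byte, not a big structure.
    return struct.pack(">B", code)

def decode_predefined_response(frame: bytes) -> dict:
    (code,) = struct.unpack(">B", frame)
    return PREDEFINED[code]  # the client supplies the heavy details

caps = decode_predefined_response(encode_predefined_response(ARCH_PSX))
```

                  The trade-off is exactly the one described above: one byte on the wire, at the cost of the client carrying (and keeping current) the pre-defined structures.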



                  • #10
                    I don't have nearly your experience, but this is what I'm going with for a response.

                    I think that you mean to say that by defining the system, certain constraints on the input and output will be present, and can be forced, so that these responses can be relied upon. I also caught the idea about how using an existing structure would dramatically reduce the data sent setting up the environment and confirming some assumptions.

                    Something I can't seem to escape from is the idea of a 'normal' and a 'short' style of communication. If it always sends the same info in the same order, then the extras for the system may simply include a string like "16C2835", with each of those numbers representing a pre-programmed small packet structure (like AND 3 for 2^? data size (split for each register), int(num/4) for the info being represented, such as registers, RAM, or custom output), and the order received/indicated when sending. Why I can't get off of this idea is puzzling, so I'll just throw it out here and let you figure out possible motivations.

                    Part of building a low-level debugger is understanding the architecture.

                    You can have it send something a bit closer to this, so you don't need to keep the program and its files permanently updated:

                    GB:
                    Code:
                    CAPABILITIES_RESPONSE {
                      registers[0]: "af|bc|de|hl|sp|pc|ime|ima|lcdc|stat|ly|cnt|ie|if|spd",
                      mem_map[0]: {
                        ram: 0x0000C000 - 0x0000DFFF,
                        ram: 0x0000FF80 - 0x0000FFFE,
                        scratchpad: 0x0000FF90 - 0x0000FFF0,
                        ...
                      }
                    }

                    Then, have the low-level debugger send data that fits the array of info (including non-numeric info, optionally).

                    It's a good idea to include architecture files to reduce the workload, but there may be cases when it's better to let the authors put together what they want internally.

                    The "memcard: spi_bus," may well be simple to document if the order the external devices are presented is consistent with an internal array in the debugger. If memcard1=inserted {bus[0] = true} else {bus[0] = false}, then send the bus[] array to the debugger. An array approach may allow for speedy update of the status of elements of the system that change, without actually redefining much of the system's definition setup.

                    That's all that I'm getting to come to mind right now. Hope it's at least vaguely useful.
                    This reality is mine. Go hallucinate your own.



                    • #11
                      Originally posted by ugetab
                      Part of building a low-level debugger is understanding the architecture.
                      I know exactly what you mean by that; when I wrote GCNrd, I had no concept of 'defining' the architecture to the client, because the client already knew everything about the GCN architecture (I taught it well).

                      From the low-level debugger perspective, the low-level debugger will always know everything about the architecture it targets (the authors will teach it well). And to some extent, the client side will usually be well aware of the architecture it targets. But from the protocol perspective, it should know nothing about architectures or the type of data being sent; its only purpose in life is passing information. It should be agnostic, unprejudiced about what it carries. The only way I can figure to do this in a completely cross-architecture manner is by defining certain boundaries for the data.

                      As an example, on GB you only have 8/16-bit registers. If the client is trained to expect this, then all is well. But if the client understands how to speak to a GameBoy and also knows how to speak to a GBA (where registers are 32-bit), then it must have some idea of which target it's dealing with or it will become very easily confused.

                      Automating some simple grunt-work (like automatically detecting what kind of target the client is connecting to) during an initialization phase seems like an elegant solution. At the very least, it will make developing and using multi-target-capable clients (debugger interfaces) much easier; for one thing, the user will not have to specifically choose whether she wants to debug a GB or a GBA... remember, the low-level debugger already knows this information anyway! And it's probably a good thing to share with the client.
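                      The GB/GBA register-width point can be sketched like this (the table and function names are invented; only the 16- vs. 32-bit widths come from the discussion above):

```python
# Invented names; only the GB (8/16-bit) and GBA (32-bit) register
# widths come from the discussion. Once initialization identifies the
# target, the client consults a table instead of asking the user.
REGISTER_WIDTH_BITS = {
    "GB": 16,
    "GBA": 32,
}

def parse_register(target: str, raw: bytes) -> int:
    """Interpret a raw register dump according to the detected target."""
    width_bytes = REGISTER_WIDTH_BITS[target] // 8
    return int.from_bytes(raw[:width_bytes], "big")

# The same bytes mean different things to different targets:
raw = b"\x12\x34\x56\x78"
gb_value = parse_register("GB", raw)    # consumes 2 bytes
gba_value = parse_register("GBA", raw)  # consumes 4 bytes
```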


                      (Sorry if this seems like an inappropriate response. I just feel the need to clarify my motivations behind this idea.)


                      With that out of the way, I think your suggestion for keeping arrays structured the same as they are defined in the capabilities/initialization is something that should be taken very seriously; it would simplify matters in a number of ways.
                      Last edited by Parasyte; 11-07-2008, 10:41:20 PM.



                      • #12
                        What I was trying to convey was the idea that the low-level debugger can send the information it wants displayed, as well as tidbits of info about it (like the F register in AF being a comparison operator, with E and C comparison bits in it). This initializes the environment of the debugger one time, likely as a simple part of the connection initialization (which would also keep it from being a show-stopper if a full client reinitialization is needed). The packets then become something like:

                        010800000000000000010A = 01 for the area (like the register group), 08 for the data length, that many bytes of data, and a 1-byte CRC directly after the data for confirmation. The client then interprets the data as it's been initialized to do so: af, bc, de, etc., as word data. The protocol simply tells the debugger what type of data it's sending, packs it together, then lets the client sort out the details.

                        I put in the initialization-info idea because that may not be too large an amount to send for unsupported architectures (though I'm not sure on this point), or for cases like MAME, which can have an enormous number of combined architectures. In cases where the CPU and memory constraints are documented, your predefined referencing system is perfect, though. A PSX emulator only has a PSX environment. The MAME emulator could need Z80 * 2 + 6502 + a pretty highly customized ROM map for double-decoded data to display (Super PANG) + an unusual RAM write exclusion. The second example isn't the norm, but that's where the idea stemmed from.
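                        The example packet above (010800000000000000010A) can be reproduced like this; note the post never names the CRC algorithm, so the plain byte-sum used here is only a guess that happens to match the 0A in the example:

```python
# Layout from the post: 1 area byte, 1 length byte, `length` data bytes,
# then a 1-byte check value. The post does not specify the CRC; a plain
# byte-sum over header+data is assumed here because it reproduces the
# 0x0A in the example packet.

def build_packet(area: int, data: bytes) -> bytes:
    header = bytes([area, len(data)])
    check = sum(header + data) & 0xFF
    return header + data + bytes([check])

def parse_packet(packet: bytes):
    area, length = packet[0], packet[1]
    data = packet[2:2 + length]
    check = packet[2 + length]
    if (sum(packet[:2 + length]) & 0xFF) != check:
        raise ValueError("checksum mismatch")
    return area, data

# Register-group packet from the post: area 01, eight data bytes.
pkt = build_packet(0x01, bytes(7) + b"\x01")
```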
                        This reality is mine. Go hallucinate your own.



                        • #13
                          I've created a [very simple] flow chart to graph the overall dialogue of communication between a client (user interface) and server (low-level debugger). This may not introduce new ideas, and may be missing some very important steps outside of this generalized view of the process. Again, this is more complex than most of the things I've developed in the past, but the main concepts are the same. This is how I envision the protocol will be utilized. Bear in mind, the protocol may be connecting two remote machines, or two processes within the same environment, and ideally with a good choice of options to make these connections.

                          Anyway, the chart itself should be fairly self-explanatory:


                          Upon further review, the "process and respond to commands" sections on the debugger side should be one large block; after initialization, the debugger should become a strict server which does nothing but process and respond to commands. :P I will probably update the flow chart later to reflect this ideology.
                          Last edited by Parasyte; 11-10-2008, 12:29:27 AM.



                          • #14
                            OK, the chart has been updated. Pretty simple, basic stuff.



                            • #15
                              I have been trying to think of every possible useful feature for a debugger to have, to build a sane protocol from that perspective. However, I am taking a minimalist approach: if a complex feature can be implemented by combining several smaller, more generic features (which may also be useful in other complex features!), then I opt to break it down into the generic feature set.

                              So far, I have been able to identify 6 independent groups of "operations" that the protocol must define. Using these 6 operation groups, every conceivable debugger feature should be possible to implement. Here are the groups, with some example operations for each:

                              Code:
                              1) Diagnostics (Init, Shutdown, Ping/Pong, Reset, ...)
                              2) CPU handling (Get/set process states, register read/write, arbitrary code execution, general CPU control, ...)
                              3) Memory handling (read, write, address conversion, hardware I/O, cache control, ...)
                              4) Breakpoint handling (add, delete, edit, get, ...)
                              5) Stream handling (stdin/stdout/stderr, debugger-specific messages, ...)
                              6) Vendor-specific (custom command sets, should be discouraged unless absolutely necessary)
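                              One way to sketch the six groups in code (the numeric values and the high-nibble convention are assumptions; only the grouping itself comes from the list above):

```python
from enum import IntEnum

# Only the six groups come from the list above; the opcode values and
# the high-nibble grouping convention are illustrative assumptions.
class OpGroup(IntEnum):
    DIAGNOSTICS = 0x00  # Init, Shutdown, Ping/Pong, Reset, ...
    CPU         = 0x10  # process states, registers, code execution, ...
    MEMORY      = 0x20  # read, write, address conversion, ...
    BREAKPOINT  = 0x30  # add, delete, edit, get, ...
    STREAM      = 0x40  # stdin/stdout/stderr, debugger messages, ...
    VENDOR      = 0xF0  # custom command sets (discouraged)

def group_of(opcode: int) -> OpGroup:
    """Classify an opcode by its high nibble."""
    return OpGroup(opcode & 0xF0)

memory_read_group = group_of(0x21)  # a hypothetical 'memory read' opcode
```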
                              A few examples are in order to understand how this small subset of operations should be able to implement any conceivable debugger feature (a big statement):

                              To implement a feature like audio/video recording on a video game console, a client could set breakpoints on vblank and audio buffer requests (breakpoint handler group). When each of these hits, vmem is dumped (memory handler group) and converted/encoded into a suitable video container format, and the same for the audio buffer.

                              For a feature like trace logging, the client would use a 'step' breakpoint and record the state changes at each hit. The recording could then be used to 'step backwards' by manipulating the CPU (CPU handler group) with the inverse of operations in the recorded trace. Taking it a step further, 'save states' can be captured entirely with the CPU and memory handlers.

                              To put it another way: if access to certain memory locations/hardware is not available through the memory handler (BAD!), it's still accessible by uploading custom code through the memory handler, running it with the CPU handler, and dumping out the results with the memory handler again, where the custom code simply reads the requested data into easily accessible memory.
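                              That fallback can be sketched as three calls against a hypothetical client API (upload/execute/read and the FakeDebugger stand-in are all invented for illustration):

```python
# Everything here is invented for illustration: upload/execute/read are
# a hypothetical client API, and FakeDebugger simulates a target with a
# region that no memory map reaches.

class FakeDebugger:
    def __init__(self):
        self.ram = bytearray(256)          # accessible RAM
        self.hidden = b"\xde\xad\xbe\xef"  # reachable only by custom code

    def upload(self, addr, code):          # memory handler: write
        self.ram[addr:addr + len(code)] = code

    def execute(self, addr):               # CPU handler: run the stub
        # Pretend the uploaded stub copies the hidden bytes into the
        # 4 bytes of RAM directly after itself.
        self.ram[addr + 4:addr + 8] = self.hidden

    def read(self, addr, size):            # memory handler: read
        return bytes(self.ram[addr:addr + size])

def read_unreachable(dbg, scratch_addr, stub, size):
    dbg.upload(scratch_addr, stub)                   # 1. upload stub
    dbg.execute(scratch_addr)                        # 2. run it
    return dbg.read(scratch_addr + len(stub), size)  # 3. dump the copy

result = read_unreachable(FakeDebugger(), 0x10, b"\x00" * 4, 4)
```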

                              I haven't spec'd out any particulars of the "init" diagnostic operation, but I'll be taking ugetab's earlier comments and suggestions into account. The way it defines CPU/memory maps (in the case of non-predefined architectures) will probably define how the CPU/memory handler groups will function. (If you have two CPUs, you want to be sure you're talking to the right one. And then you want to be sure that you understand how the memory map looks to that particular CPU -- which will probably be different from what the other one sees.)

                              That's about it for now. But, being aware that I'm human and make mistakes, I can't help but feel like I've forgotten some big requirements. Comments?
                              Last edited by Parasyte; 05-01-2009, 08:05:46 PM.

