Compare commits


16 commits

Author SHA1 Message Date
311103e82e
Add (limited) pre-commit
We check reuse & end-of-line only, not pylint or black.

License information for Python files is based on the git commit history
of passport.py.
2022-04-15 10:53:53 -05:00
60a95f804f
Rename the program, make it installable via pip 2022-04-15 10:41:40 -05:00
4f6f4010ae
Rename the program, make it installable via pip 2022-04-15 10:38:25 -05:00
6b4233302a
Add --output-dir 2022-04-15 08:59:33 -05:00
50df953d9d
improve argument handling
can now batch-convert!
2022-04-15 08:50:37 -05:00
92bb87bdaf
remove unused strings 2022-04-15 08:50:28 -05:00
36320957b2
Move the main program to where it goes by py3 convention 2022-04-15 08:32:00 -05:00
9401a70cad
Remove most of what is not rawconvert 2022-04-15 08:31:02 -05:00
e5f22806ce
remove old code copies 2022-04-15 08:04:53 -05:00
015bd67fff
explain what we're up to 2022-04-14 14:33:02 -05:00
9542850df5
WIP time to tidy 2022-04-14 14:09:11 -05:00
b1e89a4d1b
WIP 2022-04-14 08:52:33 -05:00
309ed70ce7
WIP 2022-04-11 10:30:22 -05:00
9b625ec118
WIP rawconvert has converted a (non-protected) fluxengine a2r to woz 2022-04-02 09:21:53 -05:00
a4060ed7c3
Correctly write physical sector at the end of the track
In particular, `dos33.a2r` linked in
https://github.com/a2-4am/passport.py/issues/3
ends up with sector 0 having a start of 49255 and an end of 3295.

This fixes `dos33.a2r` but not `Copy II Plus Parameter Disk - Disk 1, Side A.a2r`.
Its .woz file now passes `passport.py verify` but still doesn't boot
in an emulator. (In fact, it may have passed `verify woz` before, and my
report otherwise was incorrect.)
2022-03-23 12:39:33 -05:00
11c93c1f2b
find length of bits in the usual way 2022-03-23 08:51:51 -05:00
58 changed files with 478 additions and 2858 deletions

7
.gitignore vendored

@ -1 +1,8 @@
# SPDX-FileCopyrightText: 2022 Kattni Rembor, written for Adafruit Industries
#
# SPDX-License-Identifier: MIT
__pycache__
*.egg-info
a2woz/__version__.py
build

14
.pre-commit-config.yaml Normal file

@ -0,0 +1,14 @@
# SPDX-FileCopyrightText: 2020 Diego Elio Pettenò
#
# SPDX-License-Identifier: Unlicense
repos:
- repo: https://github.com/fsfe/reuse-tool
rev: v0.12.1
hooks:
- id: reuse
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v2.3.0
hooks:
- id: check-yaml
- id: end-of-file-fixer

9
LICENSES/MIT.txt Normal file

@ -0,0 +1,9 @@
MIT License
Copyright (c) <year> <copyright holders>
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

10
LICENSES/Unlicense.txt Normal file

@ -0,0 +1,10 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means.
In jurisdictions that recognize copyright laws, the author or authors of this software dedicate any and all copyright interest in the software to the public domain. We make this dedication for the benefit of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of relinquishment in perpetuity of all present and future rights to this software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <http://unlicense.org/>


@ -1 +1,71 @@
# passport.py
<!--
SPDX-FileCopyrightText: 2022 Jeff Epler for Adafruit Industries
SPDX-License-Identifier: MIT
-->
# a2woz - minimally process a2r files into woz files
## Usage
One-time installation:
```shell
pip install git+https://github.com/adafruit/a2woz
```
Convert a single file:
```shell
a2woz input.a2r
```
Convert multiple files, with output directory:
```shell
a2woz --output-dir out *.a2r
```
Full usage:
```shell
a2woz --help
```
## Theory of a2r to woz raw conversion:
The a2r file contains "one and a fraction" revolutions for each track. (It can actually contain multiple revolutions, but ignore that for now.)
`a2woz` takes a revolution, then finds all the "sync points".
"sync points" are a sequence of 2 or more "FF36" or "FF40", which are used
by the floppy interface controller to synchronize with data on the floppy.
For each pair of sync points within some distance of the start of the capture,
and some distance of the "estimated bit length" of the capture, find the
similarity measure. A similarity of 1.0 indicates that the next few thousand
bits (at least one full 256-byte sector) are an exact match; a similiary of
0.67 seems to happen for random/fake flux regions.
The distance between the pair of sync points with the best similarity is used as the "exact bit length" of the track.
Ties are broken by choosing the resulting track length that is closest to the estimated bit length.
Chop the flux after exactly this many bits, and write it to the output woz file.
That's about all there is to it.
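In code, the selection step amounts to something like the sketch below (illustrative only; `best_splice` and its defaults are names invented for this sketch, and the working version is the `Track.fix()` method added to `a2woz/wozardry.py` further down in this changeset):
```python
# Sketch of the splice search described above; not the shipped code.
def best_splice(bits, sync_positions, est_bit_len, window=4000, max_dist=8000):
    # Seed with a do-nothing candidate so there is always something to choose.
    candidates = [((0.0, 0, 0), 0, est_bit_len)]
    for p1 in sync_positions:
        if p1 > max_dist:
            continue                  # p1 must lie near the start of the capture
        ref = bits[p1:p1 + window]
        for p2 in sync_positions:
            if abs(p2 - est_bit_len) > max_dist:
                continue              # p2 must lie near the estimated bit length
            similarity = sum(a == b for a, b in zip(ref, bits[p2:p2 + window])) / window
            # Rank by similarity, then closeness to the estimated length,
            # then closeness to the start of the capture.
            candidates.append(((similarity, -abs((p2 - p1) - est_bit_len), -p1), p1, p2))
    _, p1, p2 = max(candidates)
    return p1, p2                     # keep bits[p1:p2] as one exact revolution
```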
This has worked for a small set of a2r files:
* Amnesia - Disk 1, Side A.a2r (4am from archive.org)
* DOS 3.3 System Master [1983] - Disk 1, Side A.a2r (cowgod from archive.org)
* skyfox.a2r (jepler from fluxengine)
## TODO
* Share with the world
* Try more a2rs
* Graft in the greaseweazle flux readers & use them as input formats
* Try different revolutions, if available in the a2r file, hopefully
finding a single best revolution
* Properly handle "weak bits" by locating stretches that look like they are
not valid flux (due to sequences of 3+ zeros), and setting all the bits in
the region to 0. A proper emulator then generates fake flux for these
sections of the track (a sketch of this detection follows below).
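Nothing like this weak-bit handling exists in this changeset yet; a rough sketch of the detection described in the last TODO item might look like the following, assuming `bits` is a `bitarray.bitarray` as elsewhere in a2woz (`weak_bit_spans`, `zero_weak_regions`, and the `pad` margin are invented for illustration):
```python
import re

def weak_bit_spans(bits, min_run=3):
    """Yield (start, end) index pairs of zero runs at least min_run bits long."""
    for m in re.finditer("0{%d,}" % min_run, bits.to01()):
        yield m.start(), m.end()

def zero_weak_regions(bits, pad=8):
    """Force a window around each suspicious run to 0 so an emulator fakes the flux."""
    for start, end in weak_bit_spans(bits):
        for i in range(max(0, start - pad), min(len(bits), end + pad)):
            bits[i] = 0   # pad is an arbitrary margin; the real rule may differ
```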
## Credits
a2woz is based on passport.py, a2rchery, and wozardry from [@a2-4am](https://github.com/a2-4am).

94
a2woz/__init__.py Executable file

@ -0,0 +1,94 @@
# SPDX-FileCopyrightText: 2019 4am
#
# SPDX-License-Identifier: MIT
from .loggers import *
from .strings import *
from .util import *
from . import wozardry
import bitarray
import io
import json
import os.path
import time
class PassportGlobals:
def __init__(self):
# things about the disk
self.track = 0 # display purposes only
self.sector = 0 # display purposes only
self.last_track = 0
self.filename = None
class BasePassportProcessor: # base class
def __init__(self, filename, disk_image, logger_class=DefaultLogger, output_filename=None):
self.g = PassportGlobals()
self.g.filename = filename
self.g.output_filename = output_filename
self.g.disk_image = disk_image
self.g.logger = logger_class(self.g)
self.rwts = None
self.output_tracks = {}
self.burn = 0
if self.preprocess():
if self.run():
self.postprocess()
def preprocess(self):
return True
def run(self):
return True
def postprocess(self):
pass
class RawConvert(BasePassportProcessor):
def run(self):
self.g.logger.PrintByID("reading", {"filename":self.g.filename})
self.tracks = {}
# main loop - loop through disk from track $22 down to track $00
for logical_track_num in range(0x22, -1, -1):
self.g.track = logical_track_num # for display purposes only
self.g.logger.debug("Seeking to track %s" % hex(self.g.track))
for fractional_track in (0, .25, .5, .75):
physical_track_num = logical_track_num + fractional_track
track = self.g.disk_image.seek(physical_track_num)
if track and track.bits:
track.fix()
self.g.logger.debug("Writing to track %s + %.2f for %d bits" % (hex(self.g.track), fractional_track, len(track.bits)))
self.output_tracks[physical_track_num] = wozardry.Track(track.bits, len(track.bits))
return True
def postprocess(self):
output_filename = self.g.output_filename
if output_filename is None:
source_base, source_ext = os.path.splitext(self.g.filename)
output_filename = source_base + '.woz'
self.g.logger.PrintByID("writing", {"filename":output_filename})
woz_image = wozardry.WozDiskImage()
json_string = self.g.disk_image.to_json()
woz_image.from_json(json_string)
j = json.loads(json_string)
root = [x for x in j.keys()].pop()
woz_image.info["creator"] = STRINGS["header"].strip()[:32]
woz_image.info["synchronized"] = j[root]["info"]["synchronized"]
woz_image.info["cleaned"] = True #self.g.found_and_cleaned_weakbits
woz_image.info["write_protected"] = j[root]["info"]["write_protected"]
woz_image.meta["image_date"] = time.strftime("%Y-%m-%dT%H:%M:%S.000Z", time.gmtime())
for q in range(1 + (0x23 * 4)):
physical_track_num = q / 4
if physical_track_num in self.output_tracks:
woz_image.add_track(physical_track_num, self.output_tracks[physical_track_num])
try:
wozardry.WozDiskImage(io.BytesIO(bytes(woz_image)))
except Exception as e:
raise Exception from e
with open(output_filename, 'wb') as f:
f.write(bytes(woz_image))

70
a2woz/__main__.py Executable file

@ -0,0 +1,70 @@
#!/usr/bin/env python3
# SPDX-FileCopyrightText: 2019 4am
# SPDX-FileCopyrightText: 2019 Peter Ferrie
#
# SPDX-License-Identifier: MIT
# (c) 2018-9 by 4am
# MIT-licensed
import click
from . import eddimage, wozardry, a2rimage
from .loggers import DefaultLogger, DebugLogger
from . import RawConvert, PassportGlobals
from .strings import STRINGS, version
import argparse
import os.path
__progname__ = "a2woz"
@click.command()
@click.help_option()
@click.version_option(version=version)
@click.option("--debug", "-d", is_flag="True", help="print debugging information while processing")
@click.option("--output-dir", "output_dir", type=click.Path(file_okay=False, dir_okay=True), default=None, help="Output directory")
@click.option("--output", "-o", "output_file", type=click.Path(), default=None, help="Output path, defaults to the input with the extension replaced with .woz. When multiple input files are specified, --output may not be used.")
@click.option("--overwrite/--no-overwrite", "-f/-n", default=False, help="Controls whether to overwrite an output file. Files are not overwritten by default.")
@click.argument("input-files", type=click.Path(exists=True), nargs=-1)
def main(debug, input_files, output_file, output_dir, overwrite):
"Convert a disk image to .woz format with minimal processing"
logger = debug and DebugLogger or DefaultLogger
if output_file is not None and output_dir is not None:
raise SystemExit("--output and --output-dir are mutually exclusive")
if len(input_files) == 1:
if output_file is None:
output_file = os.path.splitext(input_files[0])[0] + ".woz"
if not overwrite:
if os.path.exists(output_file):
raise SystemExit(f"Use --overwrite to overwrite {output_file}.")
elif output_file is not None:
raise SystemExit(f"--output is only valid with one input file")
if output_dir is not None:
os.makedirs(output_dir, exist_ok=True)
logger = DebugLogger if debug else DefaultLogger
logger(PassportGlobals()).PrintByID("header")
for input_file in input_files:
base, ext = os.path.splitext(input_file)
ext = ext.lower()
if ext == ".edd":
reader = eddimage.EDDReader
elif ext == ".a2r":
reader = a2rimage.A2RImage
else:
raise SystemExit("unrecognized file type")
if output_dir:
output_file = os.path.join(output_dir, os.path.splitext(os.path.basename(input_file))[0] + ".woz")
with open(input_file, "rb") as f:
RawConvert(input_file, reader(f), logger, output_file)
if __name__ == '__main__':
main()


@ -1,5 +1,9 @@
#!/usr/bin/env python3
# SPDX-FileCopyrightText: 2019 4am
#
# SPDX-License-Identifier: MIT
# (c) 2018 by 4am
# MIT-licensed


@ -1,5 +1,9 @@
from passport.wozardry import Track, raise_if
from passport import a2rchery
# SPDX-FileCopyrightText: 2019 4am
#
# SPDX-License-Identifier: MIT
from .wozardry import Track, raise_if
from . import a2rchery
import bitarray
import collections
@ -9,7 +13,15 @@ class A2RImage:
def __init__(self, iostream):
self.tracks = collections.OrderedDict()
self.a2r_image = a2rchery.A2RReader(stream=iostream)
self.speed = 32
self._speed = 32
@property
def speed(self):
if self._speed is None:
fluxxen = flux_record["data"][1:]
speeds = [(len([1 for i in fluxxen[:8192] if i%t==0]), t) for t in range(0x1e,0x23)]
self._speed = speeds[-1][1]
return self._speed
def to_json(self):
return self.a2r_image.to_json()
@ -19,14 +31,13 @@ class A2RImage:
bits = bitarray.bitarray()
if not flux_record or flux_record["capture_type"] != a2rchery.kCaptureTiming:
return bits
tick_count = flux_record['tick_count']
fluxxen = flux_record["data"][1:]
if not self.speed:
speeds = [(len([1 for i in fluxxen[:8192] if i%t==0]), t) for t in range(0x1e,0x23)]
speeds.sort()
self.speed = speeds[-1][1]
speed = self.speed
flux_total = flux_start = -speed//2
rev_total = 0
for flux_value in fluxxen:
rev_total += flux_value
flux_total += flux_value
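# a 0xFF byte means the flux interval did not end within 255 ticks; keep accumulating ticks and read the next byte before emitting any bits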
if flux_value == 0xFF:
continue
@ -34,6 +45,9 @@ class A2RImage:
bits.extend("0" * (flux_total // speed))
bits.extend("1")
flux_total = flux_start
# if rev_total > tick_count:
# print(f"bailing out at {rev_total}")
# break
return bits
def seek(self, track_num):
@ -50,8 +64,13 @@ class A2RImage:
# which is smarter but takes longer)
bits = bitarray.bitarray()
if location in self.a2r_image.flux:
bits = self.to_bits(self.a2r_image.flux[location][0])
self.tracks[location] = Track(bits, len(bits))
global flux_
flux_record = self.a2r_image.flux[location][0]
bits = self.to_bits(flux_record)
est_bit_len = round(flux_record['tick_count'] / self.speed)
else:
est_bit_len = None
self.tracks[location] = Track(bits, len(bits), est_bit_len)
return self.tracks[location]
def reseek(self, track_num):


@ -1,4 +1,8 @@
from passport.wozardry import Track, raise_if
# SPDX-FileCopyrightText: 2019 4am
#
# SPDX-License-Identifier: MIT
from .wozardry import Track, raise_if
import bitarray
import json


@ -1,3 +1,7 @@
# SPDX-FileCopyrightText: 2019 4am
#
# SPDX-License-Identifier: MIT
class BaseLogger: # base class
def __init__(self, g):
self.g = g


@ -1,4 +1,8 @@
from passport.loggers.default import DefaultLogger
# SPDX-FileCopyrightText: 2019 4am
#
# SPDX-License-Identifier: MIT
from .default import DefaultLogger
import sys
class DebugLogger(DefaultLogger):


@ -1,5 +1,9 @@
from passport.loggers import BaseLogger
from passport.strings import STRINGS
# SPDX-FileCopyrightText: 2019 4am
#
# SPDX-License-Identifier: MIT
from . import BaseLogger
from ..strings import STRINGS
import sys
class DefaultLogger(BaseLogger):

8
a2woz/loggers/silent.py Normal file

@ -0,0 +1,8 @@
# SPDX-FileCopyrightText: 2019 4am
#
# SPDX-License-Identifier: MIT
from . import BaseLogger
class SilentLogger(BaseLogger):
"""print nothing"""

15
a2woz/strings.py Normal file

@ -0,0 +1,15 @@
# SPDX-FileCopyrightText: 2019 4am
#
# SPDX-License-Identifier: MIT
try:
from .__version__ import version
except:
version = "UNKNOWN"
_header = "a2woz " + version
STRINGS = {
"header": f"a2woz {version}\n",
"reading": "Reading from {filename}\n",
"writing": "Writing to {filename}\n",
}


@ -1,3 +1,7 @@
# SPDX-FileCopyrightText: 2019 4am
#
# SPDX-License-Identifier: MIT
__all__ = ["find", "decode44", "concat_track"]
def decode44(n1, n2):


@ -1,3 +1,7 @@
# SPDX-FileCopyrightText: 2019 4am
#
# SPDX-License-Identifier: MIT
import re
WILDCARD = b'\x97'


@ -1,5 +1,9 @@
#!/usr/bin/env python3
# SPDX-FileCopyrightText: 2019 4am
#
# SPDX-License-Identifier: MIT
# (c) 2018-9 by 4am
# MIT-licensed
@ -11,6 +15,7 @@ import io
import json
import itertools
import os
import re
import sys
__version__ = "2.0-beta" # https://semver.org
@ -143,14 +148,75 @@ def from_intish(v, errorClass, errorString):
def raise_if(cond, e, s=""):
if cond: raise e(s)
sync_rx = re.compile(r"(1111111100?){2,}")
FF40x5 = bitarray.bitarray('1111111100' * 5)
FF36x8 = bitarray.bitarray('111111110' * 8)
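# These describe runs of self-sync nibbles: FF40 is eight 1-bits plus two trailing 0-bits (10 bit cells), FF36 is eight 1-bits plus one trailing 0-bit (9 bit cells); sync_rx matches two or more of either back to back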
def bitarray_count_occurrence(haystack, needle):
start = 0
count = 0
while start != -1:
start = haystack.find(needle, start)
if start != -1:
count += 1
start += 1
return count
def find_all_sync(haystack, pos=0, endpos=sys.maxsize):
haystack = haystack.to01()
return sync_rx.finditer(haystack, pos, endpos)
class Track:
def __init__(self, bits, bit_count):
def __init__(self, bits, bit_count, est_bit_len=None):
self.bits = bits
while len(self.bits) > bit_count:
self.bits.pop()
self.bit_count = bit_count
self.bit_index = 0
self.revolutions = 0
self.fixed = False
self.est_bit_len = est_bit_len
def fix(self, max_match_dist=8000, match_range=4000):
if self.fixed:
return
# A 300RPM floppy drive has room for 50,000 bits at 4us, so if
# the estimated bit length is not known, assume it.
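# (at 300 RPM a revolution takes 0.2 s; 0.2 s / 4 us per bit cell = 50,000 bit cells)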
est_bit_len = self.est_bit_len or 50000
# Find all possible sync points. These are any combination of two
# or more of the FF36 / FF40 codes.
sync_pos = [match.start() for match in find_all_sync(self.bits)]
# Go through all possible splice pairs and find the best match.
# They're ranked first on the number of bit matches; in the case of
# a tie, the one with length closest to the nominal length is preferred
# and then the one closest to the start of the revolution
splice_points = [((0,0,0), 0, est_bit_len)]
for p1 in sync_pos:
if p1 > max_match_dist: continue
ref_range = self.bits[p1:p1+match_range]
for p2 in sync_pos:
if abs(p2-self.est_bit_len) > max_match_dist: continue
comp_range = self.bits[p2:p2+match_range]
similarity = sum(a == b for a, b in zip(ref_range, comp_range)) / match_range
abs_len_diff = abs((p2-p1) - est_bit_len)
splice_points.append(((similarity,-abs_len_diff,-p1), p1, p2))
splice_points.sort()
score, p1, p2 = splice_points[-1]
wrap_point = p2 - p1
del self.bits[p2:]
del self.bits[:p1]
while self.bit_index > wrap_point:
self.bit_index -= wrap_point
self.revolutions += 1
self.fixed = True
self.est_bit_len = wrap_point
def bit(self):
b = self.bits[self.bit_index] and 1 or 0
@ -522,8 +588,8 @@ class WozDiskImage:
compatible_hardware_raw = to_uint16(compatible_hardware_bitfield)
required_ram_raw = to_uint16(self.info["required_ram"])
if self.tracks:
largest_bit_count = max([track.bit_count for track in self.tracks])
largest_block_count = (((largest_bit_count+7)//8)+511)//512
largest_bit_len = max([track.bit_count for track in self.tracks])
largest_block_count = (((largest_bit_len+7)//8)+511)//512
else:
largest_block_count = 0
largest_track_raw = to_uint16(largest_block_count)
@ -578,7 +644,7 @@ class WozDiskImage:
block_size = len(padded_bytes) // 512
starting_block += block_size
trk_chunk.extend(to_uint16(block_size))
trk_chunk.extend(to_uint32(track.bits.length()))
trk_chunk.extend(to_uint32(len(track.bits)))
bits_chunk.extend(padded_bytes)
for i in range(len(self.tracks), 160):
trk_chunk.extend(to_uint16(0))


@ -1,86 +0,0 @@
#!/usr/bin/env python3
# (c) 2018-9 by 4am
# MIT-licensed
from passport import eddimage, wozardry, a2rimage
from passport.loggers import DefaultLogger, DebugLogger
from passport import Crack, Verify, Convert
from passport.strings import __date__, STRINGS
import argparse
import os.path
__version__ = "0.2" # https://semver.org/
__progname__ = "passport"
class BaseCommand:
def __init__(self, name):
self.name = name
self.logger = None
self.reader = None
self.processor = None
def setup(self, subparser, description=None, epilog=None, help="disk image (.a2r, .woz, .edd)", formatter_class=argparse.HelpFormatter):
self.parser = subparser.add_parser(self.name, description=description, epilog=epilog, formatter_class=formatter_class)
self.parser.add_argument("file", help=help)
self.parser.set_defaults(action=self)
def __call__(self, args):
if not self.processor: return
if not self.reader:
base, ext = os.path.splitext(args.file)
ext = ext.lower()
if ext == ".woz":
self.reader = wozardry.WozDiskImage
elif ext == ".edd":
self.reader = eddimage.EDDReader
elif ext == ".a2r":
self.reader = a2rimage.A2RImage
else:
print("unrecognized file type")
if not self.logger:
self.logger = args.debug and DebugLogger or DefaultLogger
with open(args.file, "rb") as f:
self.processor(args.file, self.reader(f), self.logger)
class CommandVerify(BaseCommand):
def __init__(self):
BaseCommand.__init__(self, "verify")
self.processor = Verify
def setup(self, subparser):
BaseCommand.setup(self, subparser,
description="Verify track structure and sector data in a disk image")
class CommandConvert(BaseCommand):
def __init__(self):
BaseCommand.__init__(self, "convert")
self.processor = Convert
def setup(self, subparser):
BaseCommand.setup(self, subparser,
description="Convert a disk image to .woz format")
class CommandCrack(BaseCommand):
def __init__(self):
BaseCommand.__init__(self, "crack")
self.processor = Crack
def setup(self, subparser):
BaseCommand.setup(self, subparser,
description="Convert a disk image to .dsk format")
if __name__ == "__main__":
cmds = [CommandVerify(), CommandConvert(), CommandCrack()]
parser = argparse.ArgumentParser(prog=__progname__,
description="""A multi-purpose tool for working with copy-protected Apple II disk images.
See '""" + __progname__ + """ <command> -h' for help on individual commands.""",
formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument("-v", "--version", action="version", version=STRINGS["header"])
parser.add_argument("-d", "--debug", action="store_true", help="print debugging information while processing")
sp = parser.add_subparsers(dest="command", help="command")
for command in cmds:
command.setup(sp)
args = parser.parse_args()
args.action(args)

1
passport/.gitignore vendored

@ -1 +0,0 @@
__pycache__


@ -1,761 +0,0 @@
from passport.loggers import *
from passport.rwts import *
from passport.patchers import *
from passport.strings import *
from passport.constants import *
from passport.util import *
from passport import wozardry
import bitarray
import io
import json
import os.path
import time
class PassportGlobals:
def __init__(self):
# things about the disk
self.is_boot0 = False
self.is_boot1 = False
self.is_master = False
self.is_rwts = False
self.is_dos32 = False
self.is_prodos = False
self.is_dinkeydos = False
self.is_pascal = False
self.is_protdos = False
self.is_daviddos = False
self.is_ea = False
self.possible_gamco = False
self.is_optimum = False
self.is_mecc_fastloader = False
self.mecc_variant = 0
self.possible_d5d5f7 = False
self.is_8b3 = False
self.is_milliken1 = False
self.is_adventure_international = False
self.is_laureate = False
self.is_datasoft = False
self.is_sierra = False
self.is_sierra13 = False
self.is_f7f6 = False
self.is_trillium = False
self.polarware_tamper_check = False
self.force_disk_vol = False
self.captured_disk_volume_number = False
self.disk_volume_number = None
self.found_and_cleaned_weakbits = False
self.protection_enforces_write_protected = False
# things about the conversion process
self.tried_univ = False
self.track = 0 # display purposes only
self.sector = 0 # display purposes only
self.last_track = 0
self.filename = None
class BasePassportProcessor: # base class
def __init__(self, filename, disk_image, logger_class=DefaultLogger):
self.g = PassportGlobals()
self.g.filename = filename
self.g.disk_image = disk_image
self.g.logger = logger_class(self.g)
self.rwts = None
self.output_tracks = {}
self.patchers = []
self.patches_found = []
self.patch_count = 0 # number of patches found across all tracks
self.patcher_classes = [
SunburstPatcher,
#JMPBCF0Patcher,
#JMPBEB1Patcher,
#JMPBECAPatcher,
#JMPB660Patcher,
#JMPB720Patcher,
BadEmuPatcher,
BadEmu2Patcher,
RWTSPatcher,
#RWTSLogPatcher,
MECC1Patcher,
MECC2Patcher,
MECC3Patcher,
MECC4Patcher,
#ROL1EPatcher,
#JSRBB03Patcher,
#DavidBB03Patcher,
#RWTSSwapPatcher,
#RWTSSwap2Patcher,
BorderPatcher,
#JMPAE8EPatcher,
#JMPBBFEPatcher,
#DatasoftPatcher,
#NibTablePatcher,
#DiskVolPatcher,
#C9FFPatcher,
#MillikenPatcher,
#MethodsPatcher,
#JSR8B3Patcher,
#LaureatePatcher,
#PascalRWTSPatcher,
#MicrogramsPatcher,
#DOS32Patcher,
#DOS32DLMPatcher,
MicrofunPatcher,
#T11DiskVolPatcher,
#T02VolumeNamePatcher,
UniversalE7Patcher,
A6BC95Patcher,
A5CountPatcher,
D5D5F7Patcher,
#ProDOSRWTSPatcher,
#ProDOS6APatcher,
#ProDOSMECCPatcher,
BBF9Patcher,
#MemoryConfigPatcher,
#OriginPatcher,
#RWTSSwapMECCPatcher,
#ProtectedDOSPatcher,
#FBFFPatcher,
#FBFFEncryptedPatcher,
#PolarwarePatcher,
#SierraPatcher,
#CorrupterPatcher,
#EAPatcher,
#GamcoPatcher,
#OptimumPatcher,
BootCounterPatcher,
#JMPB412Patcher,
#JMPB400Patcher,
AdventureInternationalPatcher,
#JSR8635Patcher,
#JMPB4BBPatcher,
#DOS32MUSEPatcher,
#SRAPatcher,
#Sierra13Patcher,
#SSPROTPatcher,
#F7F6Patcher,
#TrilliumPatcher,
]
self.burn = 0
if self.preprocess():
if self.run():
self.postprocess()
def SkipTrack(self, logical_track_num, track):
# don't look for whole-track protections on track 0, that's silly
if logical_track_num == 0: return False
# Missing track?
if not track.bits:
self.g.logger.PrintByID("unformat")
return True
# Electronic Arts protection track?
if logical_track_num == 6:
if self.rwts.find_address_prologue(track):
address_field = self.rwts.address_field_at_point(track)
if address_field and address_field.track_num == 5: return True
# Nibble count track?
repeated_nibble_count = 0
start_revolutions = track.revolutions
last_nibble = 0x00
while (repeated_nibble_count < 512 and track.revolutions < start_revolutions + 2):
n = next(track.nibble())
if n == last_nibble:
repeated_nibble_count += 1
else:
repeated_nibble_count = 0
last_nibble = n
if repeated_nibble_count == 512:
self.g.logger.PrintByID("sync")
return True
# TODO IsUnformatted nibble test and other tests
# (still need these for disks like Crime Wave and Thunder Bombs)
return False
def IDDiversi(self, t00s00):
"""returns True if T00S00 is Diversi-DOS bootloader, or False otherwise"""
return find.at(0xF1, t00s00, kIDDiversiDOSBootloader)
def IDProDOS(self, t00s00):
"""returns True if T00S00 is ProDOS bootloader, or False otherwise"""
return find.at(0x00, t00s00, kIDProDOSBootloader)
def IDPascal(self, t00s00):
"""returns True if T00S00 is Pascal bootloader, or False otherwise"""
return find.wild_at(0x00, t00s00, kIDPascalBootloader1) or \
find.at(0x00, t00s00, kIDPascalBootloader2)
def IDDavidDOS(self, t00s00):
"""returns True if T00S00 is David-DOS II bootloader, or False otherwise"""
return find.at(0x01, t00s00, kIDDavidDOS1) and \
find.wild_at(0x4A, t00s00, kIDDavidDOS2)
def IDDatasoft(self, t00s00):
"""returns True if T00S00 is encrypted Datasoft bootloader, or False otherwise"""
return find.at(0x00, t00s00, kIDDatasoft)
def IDMicrograms(self, t00s00):
"""returns True if T00S00 is Micrograms bootloader, or False otherwise"""
return find.at(0x01, t00s00, kIDMicrograms1) and \
find.at(0x42, t00s00, kIDMicrograms2)
def IDQuickDOS(self, t00s00):
"""returns True if T00S00 is Quick-DOS bootloader, or False otherwise"""
return find.at(0x01, t00s00, kIDQuickDOS)
def IDRDOS(self, t00s00):
"""returns True if T00S00 is Quick-DOS bootloader, or False otherwise"""
return find.at(0x00, t00s00, kIDRDOS)
def IDDOS33(self, t00s00):
"""returns True if T00S00 is DOS bootloader or some variation
that can be safely boot traced, or False otherwise"""
# Code at $0801 must be standard (with one exception)
if not find.wild_at(0x00, t00s00, kIDDOS33a):
return False
# DOS 3.3 has JSR $FE89 / JSR $FE93 / JSR $FB2F
# some Sierra have STA $C050 / STA $C057 / STA $C055 instead
# with the unpleasant side-effect of showing text-mode garbage
# if mixed-mode was enabled at the time
if not find.at(0x3F, t00s00,
b'\x20\x89\xFE'
b'\x20\x93\xFE'
b'\x20\x2F\xFB'
b'\xA6\x2B'):
if not find.at(0x3F, t00s00,
b'\x8D\x50\xC0'
b'\x8D\x57\xC0'
b'\x8D\x55\xC0'
b'\xA6\x2B'): return False
# Sector order map must be standard (no exceptions)
if not find.at(0x4D, t00s00,
b'\x00\x0D\x0B\x09\x07\x05\x03\x01'
b'\x0E\x0C\x0A\x08\x06\x04\x02\x0F'): return False
# standard code at $081C -> success & done
if find.at(0x1C, t00s00,
b'\x8D\xFE\x08'): return True
# Minor variant (e.g. Terrapin Logo 3.0) jumps to $08F0 and back
# but is still safe to trace. Check for this jump and match
# the code at $08F0 exactly.
# unknown code at $081C -> failure
if not find.at(0x1C, t00s00,
b'\x4C\xF0\x08'): return False
# unknown code at $08F0 -> failure, otherwise success & done
return find.at(0xF0, t00s00,
b'\x8D\xFE\x08'
b'\xEE\xF3\x03'
b'\x4C\x1F\x08')
def IDPronto(self, t00s00):
"""returns True if T00S00 is Pronto-DOS bootloader, or False otherwise"""
return find.at(0x5E, t00s00,
b'\xB0\x50'
b'\xAD\xCB\xB5'
b'\x85\x42')
def IDLaureate(self, t00s00):
"""returns True if T00S00 is Laureate bootloader, or False otherwise"""
if not find.at(0x2E, t00s00,
b'\xAE\xFF\x08'
b'\x30\x1E'
b'\xE0\x02'
b'\xD0\x05'
b'\xA9\xBF'
b'\x8D\xFE\x08'): return False
return find.at(0xF8, t00s00,
b'\x4C\x00\xB7'
b'\x00\x00\x00\xFF\x0B')
def IDMECC(self, t00s00):
"""returns True if T00S00 is MECC bootloader, or False otherwise"""
return find.at(0x00, t00s00,
b'\x01'
b'\x4C\x1A\x08'
b'\x17\x0F\x00'
b'\x00\x0D\x0B\x09\x07\x05\x03\x01'
b'\x0E\x0C\x0A\x08\x06\x04\x02\x0F')
def IDMECCVariant(self, logical_sectors):
"""returns int (1-4) of MECC bootloader variant, or 0 if no known variant is detected"""
# variant 1 (labeled "M8" on original disks)
if find.wild_at(0x02, logical_sectors[0x0B],
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9' + find.WILDCARD + \
b'\xD0\xEF'
b'\xEA'
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9' + find.WILDCARD + \
b'\xD0\xE5'
b'\xA0\x03'
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9'):
if find.wild_at(0x89, logical_sectors[0x0B],
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9' + find.WILDCARD + \
b'\xD0\xF4'
b'\xEA'
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9' + find.WILDCARD + \
b'\xD0\xF2'
b'\xEA'
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9'):
return 1
# variant 2 (labeled "M7" on original disks)
m7a = b'\xBD\x8C\xC0' \
b'\x10\xFB' \
b'\xC9' + find.WILDCARD + \
b'\xD0\xF0' \
b'\xEA' \
b'\xBD\x8C\xC0' \
b'\x10\xFB' \
b'\xC9' + find.WILDCARD + \
b'\xD0\xF2' \
b'\xA0\x03' \
b'\xBD\x8C\xC0' \
b'\x10\xFB' \
b'\xC9'
m7b = b'\xBD\x8C\xC0' \
b'\x10\xFB' \
b'\x49'
m7c = b'\xEA' \
b'\xBD\x8C\xC0' \
b'\x10\xFB' \
b'\xC9' + find.WILDCARD + \
b'\xD0\xF2' \
b'\xA0\x56' \
b'\xBD\x8C\xC0' \
b'\x10\xFB' \
b'\xC9'
if find.wild_at(0x7D, logical_sectors[7], m7a):
if find.at(0x0F, logical_sectors[7], m7b):
if find.wild_at(0x18, logical_sectors[7], m7c):
return 2
# variant 3 ("M7" variant found in Word Muncher 1.1 and others)
if find.wild_at(0xE2, logical_sectors[0x0A],
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9' + find.WILDCARD + \
b'\xD0\xEF'
b'\xEA'
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9' + find.WILDCARD + \
b'\xD0\xF2'
b'\xA0\x03'
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9'):
if find.wild_at(0x69, logical_sectors[0x0B],
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9' + find.WILDCARD + \
b'\xD0\xF4'
b'\xEA'
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9' + find.WILDCARD + \
b'\xD0\xF2'
b'\xEA'
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9'):
return 3
# variant 4 (same as variant 2 but everything is on sector 8 instead of 7)
if find.wild_at(0x7D, logical_sectors[8], m7a):
if find.at(0x0F, logical_sectors[8], m7b):
if find.wild_at(0x18, logical_sectors[8], m7c):
return 2
return 0 # unknown variant
def IDSunburst(self, logical_sectors):
"""returns True if |logical_sectors| contains track 0 of a Sunburst disk, False otherwise"""
if 4 not in logical_sectors:
return False
return find.wild_at(0x69, logical_sectors[0x04],
bytes.fromhex("48"
"A5 2A"
"4A"
"A8"
"B9 29 BA"
"8D 6A B9"
"8D 84 BC"
"B9 34 BA"
"8D FC B8"
"8D 5D B8"
"C0 11"
"D0 03"
"A9 02"
"AC"
"A9 0E"
"8D C0 BF"
"68"
"69 00"
"48"
"AD 78 04"
"90 2B"))
def IDBootloader(self, t00, suppress_errors=False):
"""returns RWTS object that can (hopefully) read the rest of the disk"""
temporary_rwts_for_t00 = Track00RWTS(self.g)
physical_sectors = temporary_rwts_for_t00.decode_track(t00, 0)
if 0 not in physical_sectors:
if not suppress_errors:
self.g.logger.PrintByID("fail")
self.g.logger.PrintByID("fatal0000")
return None
t00s00 = physical_sectors[0].decoded
logical_sectors = temporary_rwts_for_t00.reorder_to_logical_sectors(physical_sectors)
if self.IDDOS33(t00s00):
self.g.is_boot0 = True
if self.IDDiversi(t00s00):
self.g.logger.PrintByID("diversidos")
elif self.IDPronto(t00s00):
self.g.logger.PrintByID("prontodos")
else:
self.g.logger.PrintByID("dos33boot0")
if border.BorderPatcher(self.g).run(logical_sectors, 0):
return BorderRWTS(logical_sectors, self.g)
if self.IDSunburst(logical_sectors):
self.g.logger.PrintByID("sunburst")
return SunburstRWTS(logical_sectors, self.g)
return self.TraceDOS33(logical_sectors)
# TODO JSR08B3
if self.IDMECC(t00s00):
self.g.is_mecc_fastloader = True
self.g.logger.PrintByID("mecc")
mecc_variant = self.IDMECCVariant(logical_sectors)
self.g.logger.debug("mecc_variant = %d" % mecc_variant)
if mecc_variant:
return MECCRWTS(mecc_variant, logical_sectors, self.g)
# TODO DOS 3.3P
if self.IDLaureate(t00s00):
self.g.logger.PrintByID("laureate")
return LaureateRWTS(logical_sectors, self.g)
# TODO Electronic Arts
# TODO DOS 3.2
# TODO IDEncoded44
# TODO IDEncoded53
self.g.is_prodos = self.IDProDOS(t00s00)
if self.g.is_prodos:
# TODO IDVolumeName
# TODO IDDinkeyDOS
pass
self.g.is_pascal = self.IDPascal(t00s00)
self.g.is_daviddos = self.IDDavidDOS(t00s00)
self.g.is_datasoft = self.IDDatasoft(t00s00)
self.g.is_micrograms = self.IDMicrograms(t00s00)
self.g.is_quickdos = self.IDQuickDOS(t00s00)
self.g.is_rdos = self.IDRDOS(t00s00)
return self.StartWithUniv()
def TraceDOS33(self, logical_sectors):
"""returns RWTS object"""
use_builtin = False
# check that all the sectors of the RWTS were actually readable
for i in range(1, 10):
if i not in logical_sectors:
use_builtin = True
break
# TODO handle Protected.DOS here
if not use_builtin:
# check for "STY $48;STA $49" at RWTS entry point ($BD00)
use_builtin = not find.at(0x00, logical_sectors[7], b'\x84\x48\x85\x49')
if not use_builtin:
# check for "SEC;RTS" at $B942
use_builtin = not find.at(0x42, logical_sectors[3], b'\x38\x60')
if not use_builtin:
# check for "LDA $C08C,X" at $B94F
use_builtin = not find.at(0x4F, logical_sectors[3], b'\xBD\x8C\xC0')
if not use_builtin:
# check for "JSR $xx00" at $BDB9
use_builtin = not find.at(0xB9, logical_sectors[7], b'\x20\x00')
if not use_builtin:
# check for RWTS variant that has extra code before
# JSR $B800 e.g. Verb Viper (DLM), Advanced Analogies (Hartley)
use_builtin = find.at(0xC5, logical_sectors[7], b'\x20\x00')
if not use_builtin:
# check for RWTS variant that uses non-standard address for slot
# LDX $1FE8 e.g. Pinball Construction Set (1983)
use_builtin = find.at(0x43, logical_sectors[8], b'\xAE\xE8\x1F')
if not use_builtin:
# check for D5+timingbit RWTS
if find.at(0x59, logical_sectors[3], b'\xBD\x8C\xC0\xC9\xD5'):
self.g.logger.PrintByID("diskrwts")
return D5TimingBitRWTS(logical_sectors, self.g)
# TODO handle Milliken here
# TODO handle Adventure International here
if not use_builtin and (logical_sectors[0][0xFE] == 0x22):
return InfocomRWTS(logical_sectors, self.g)
if not use_builtin and (find.at(0xF4, logical_sectors[2],
b'\x4C\xCA') or
find.at(0xFE, logical_sectors[2],
b'\x4C\xCA')):
self.g.logger.PrintByID("jmpbeca")
return BECARWTS(logical_sectors, self.g)
if not use_builtin and (find.wild_at(0x5D, logical_sectors[0],
b'\x68'
b'\x85' + find.WILDCARD + \
b'\x68' + \
b'\x85' + find.WILDCARD + \
b'\xA0\x01' + \
b'\xB1' + find.WILDCARD + \
b'\x85\x54')):
self.g.logger.PrintByID("optimum")
return OptimumResourceRWTS(logical_sectors, self.g)
if not use_builtin and (find.wild_at(0x16, logical_sectors[5],
b'\xF0\x05'
b'\xA2\xB2'
b'\x4C\xF0\xBB'
b'\xBD\x8C\xC0'
b'\xA9' + find.WILDCARD + \
b'\x8D\x00\x02'
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9\xEB'
b'\xD0\xF7'
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9\xD5'
b'\xD0\xEE'
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9\xAA'
b'\xD0\xE5'
b'\xA9\x4C'
b'\xA0\x00'
b'\x99\x00\x95'
b'\x88'
b'\xD0\xFA'
b'\xCE\x46\xBB'
b'\xAD\x46\xBB'
b'\xC9\x07'
b'\xD0\xEC'
b'\xA9\x18'
b'\x8D\x42\xB9'
b'\xA9\x0A'
b'\x8D\xED\xB7'
b'\xD0\x05')):
self.g.logger.PrintByID("bb00")
if find.at(0x04, logical_sectors[5],
b'\xBD\x8D\xC0'
b'\xBD\x8E\xC0'
b'\x30\x05'
b'\xA2\xB1'
b'\x4C\xF0\xBB'):
self.g.protection_enforces_write_protected = True
return HeredityDogRWTS(logical_sectors, self.g)
if use_builtin:
return self.StartWithUniv()
self.g.logger.PrintByID("diskrwts")
return DOS33RWTS(logical_sectors, self.g)
def StartWithUniv(self):
"""return Universal RWTS object, log that we're using it, and set global flags appropriately"""
self.g.logger.PrintByID("builtin")
self.g.tried_univ = True
self.g.is_protdos = False
return UniversalRWTS(self.g)
def preprocess(self):
return True
def run(self):
self.g.logger.PrintByID("header")
self.g.logger.PrintByID("reading", {"filename":self.g.filename})
supports_reseek = ("reseek" in dir(self.g.disk_image))
# get raw track $00 data from the source disk
self.tracks = {}
self.tracks[0] = self.g.disk_image.seek(0)
# analyze track $00 to create an RWTS
self.rwts = self.IDBootloader(self.tracks[0], supports_reseek)
if not self.rwts and supports_reseek:
self.tracks[0] = self.g.disk_image.reseek(0)
self.rwts = self.IDBootloader(self.tracks[0])
if not self.rwts: return False
# initialize all patchers
for P in self.patcher_classes:
self.patchers.append(P(self.g))
# main loop - loop through disk from track $22 down to track $00
for logical_track_num in range(0x22, -1, -1):
self.g.track = logical_track_num # for display purposes only
self.g.logger.debug("Seeking to track %s" % hex(self.g.track))
# distinguish between logical and physical track numbers to deal with
# disks like Sunburst that store logical track 0x11+ on physical track 0x11.5+
physical_track_num = self.rwts.seek(logical_track_num)
# self.tracks must be indexed by physical track number so we can write out
# .woz files correctly
self.tracks[physical_track_num] = self.g.disk_image.seek(physical_track_num)
tried_reseek = False
physical_sectors = OrderedDict()
while True:
physical_sectors.update(self.rwts.decode_track(self.tracks[physical_track_num], logical_track_num, self.burn))
if self.rwts.enough(logical_track_num, physical_sectors):
break
if supports_reseek and not tried_reseek:
self.tracks[physical_track_num] = self.g.disk_image.reseek(physical_track_num)
self.g.logger.debug("Reseeking to track %s" % hex(self.g.track))
tried_reseek = True
continue
self.g.logger.debug("found %d sectors" % len(physical_sectors))
if (0x0F not in physical_sectors) and self.SkipTrack(logical_track_num, self.tracks[physical_track_num]):
physical_sectors = None
break
if self.g.tried_univ:
if logical_track_num == 0x22 and (0x0F not in physical_sectors):
self.g.logger.PrintByID("fail", {"sector":0x0F})
self.g.logger.PrintByID("fatal220f")
return False
else:
transition_sector = 0x0F
if physical_sectors:
temp_logical_sectors = self.rwts.reorder_to_logical_sectors(physical_sectors)
transition_sector = min(temp_logical_sectors.keys())
self.g.logger.PrintByID("switch", {"sector":transition_sector})
self.rwts = UniversalRWTS(self.g)
self.g.tried_univ = True
continue
if logical_track_num == 0 and type(self.rwts) != UniversalRWTSIgnoreEpilogues:
self.rwts = UniversalRWTSIgnoreEpilogues(self.g)
continue
self.g.logger.PrintByID("fail")
return False
self.save_track(physical_track_num, logical_track_num, physical_sectors)
return True
def save_track(self, physical_track_num, logical_track_num, physical_sectors):
pass
def apply_patches(self, logical_sectors, patches):
pass
class Verify(BasePassportProcessor):
def AnalyzeT00(self, logical_sectors):
self.g.is_boot1 = find.at(0x00, logical_sectors[1], kIDBoot1)
self.g.is_master = find.at(0x00, logical_sectors[1], kIDMaster)
self.g.is_rwts = find.wild_at(0x00, logical_sectors[7], kIDRWTS)
def save_track(self, physical_track_num, logical_track_num, physical_sectors):
if not physical_sectors: return {}
logical_sectors = self.rwts.reorder_to_logical_sectors(physical_sectors)
if self.rwts.enough(logical_track_num, physical_sectors):
# patchers operate on logical tracks
if logical_track_num == 0:
# set additional globals for patchers to use
self.AnalyzeT00(logical_sectors)
for patcher in self.patchers:
if patcher.should_run(logical_track_num):
patches = patcher.run(logical_sectors, logical_track_num)
if patches:
self.apply_patches(logical_sectors, patches)
self.patches_found.extend(patches)
return logical_sectors
def apply_patches(self, logical_sectors, patches):
for patch in patches:
if patch.id:
self.g.logger.PrintByID(patch.id, patch.params)
def postprocess(self):
self.g.logger.PrintByID("passver")
class Crack(Verify):
def save_track(self, physical_track_num, logical_track_num, physical_sectors):
# output_tracks is indexed on logical track number here because the
# point of cracking is normalizing to logical tracks and sectors
self.output_tracks[logical_track_num] = Verify.save_track(self, physical_track_num, logical_track_num, physical_sectors)
def apply_patches(self, logical_sectors, patches):
for patch in patches:
if patch.id:
self.g.logger.PrintByID(patch.id, patch.params)
if len(patch.new_value) > 0:
b = logical_sectors[patch.sector_num].decoded
patch.params["old_value"] = b[patch.byte_offset:patch.byte_offset+len(patch.new_value)]
patch.params["new_value"] = patch.new_value
self.g.logger.PrintByID("modify", patch.params)
for i in range(len(patch.new_value)):
b[patch.byte_offset + i] = patch.new_value[i]
logical_sectors[patch.sector_num].decoded = b
def postprocess(self):
source_base, source_ext = os.path.splitext(self.g.filename)
output_filename = source_base + '.dsk'
self.g.logger.PrintByID("writing", {"filename":output_filename})
with open(output_filename, "wb") as f:
for logical_track_num in range(0x23):
if logical_track_num in self.output_tracks:
f.write(concat_track(self.output_tracks[logical_track_num]))
else:
f.write(bytes(256*16))
if self.patches_found:
self.g.logger.PrintByID("passcrack")
else:
self.g.logger.PrintByID("passcrack0")
class Convert(BasePassportProcessor):
def preprocess(self):
self.burn = 2
return True
def save_track(self, physical_track_num, logical_track_num, physical_sectors):
track = self.tracks[physical_track_num]
if physical_sectors:
b = bitarray.bitarray(endian="big")
for s in physical_sectors.values():
b.extend(track.bits[s.start_bit_index:s.end_bit_index])
else:
# TODO call wozify here instead
b = track.bits[:51021]
# output_tracks is indexed on physical track number here because the
# point of .woz is to capture the physical layout of the original disk
self.output_tracks[physical_track_num] = wozardry.Track(b, len(b))
def postprocess(self):
source_base, source_ext = os.path.splitext(self.g.filename)
output_filename = source_base + '.woz'
self.g.logger.PrintByID("writing", {"filename":output_filename})
woz_image = wozardry.WozDiskImage()
json_string = self.g.disk_image.to_json()
woz_image.from_json(json_string)
j = json.loads(json_string)
root = [x for x in j.keys()].pop()
woz_image.info["creator"] = STRINGS["header"].strip()[:32]
woz_image.info["synchronized"] = j[root]["info"]["synchronized"]
woz_image.info["cleaned"] = True #self.g.found_and_cleaned_weakbits
woz_image.info["write_protected"] = self.g.protection_enforces_write_protected or j[root]["info"]["write_protected"]
woz_image.meta["image_date"] = time.strftime("%Y-%m-%dT%H:%M:%S.000Z", time.gmtime())
for q in range(1 + (0x23 * 4)):
physical_track_num = q / 4
if physical_track_num in self.output_tracks:
woz_image.add_track(physical_track_num, self.output_tracks[physical_track_num])
try:
wozardry.WozDiskImage(io.BytesIO(bytes(woz_image)))
except Exception as e:
raise Exception from e
with open(output_filename, 'wb') as f:
f.write(bytes(woz_image))


@ -1,197 +0,0 @@
from passport.util import *
kIDBoot1 = bytes.fromhex(
"8E E9 B7"
"8E F7 B7"
"A9 01"
"8D F8 B7"
"8D EA B7"
"AD E0 B7"
"8D E1 B7"
"A9 02"
"8D EC B7"
"A9 04"
"8D ED B7"
"AC E7 B7"
"88"
"8C F1 B7"
"A9 01"
"8D F4 B7"
"8A"
"4A"
"4A"
"4A"
"4A"
"AA"
"A9 00"
"9D F8 04"
"9D 78 04")
kIDMaster = bytes.fromhex(
"8E E9 37"
"8E F7 37"
"A9 01"
"8D F8 37"
"8D EA 37"
"AD E0 37"
"8D E1 37"
"A9 02"
"8D EC 37"
"A9 04"
"8D ED 37"
"AC E7 37"
"88"
"8C F1 37"
"A9 01"
"8D F4 37"
"8A"
"4A"
"4A"
"4A"
"4A"
"AA"
"A9 00"
"9D F8 04"
"9D 78 04")
kIDRWTS = bytes.fromhex(
"84 48"
"85 49"
"A0 02"
"8C" + find.WILDSTR + find.WILDSTR + \
"A0 04"
"8C" + find.WILDSTR + find.WILDSTR + \
"A0 01"
"B1 48"
"AA"
"A0 0F"
"D1 48"
"F0 1B"
"8A"
"48"
"B1 48"
"AA"
"68"
"48"
"91 48"
"BD 8E C0"
"A0 08"
"BD 8C C0"
"DD 8C C0"
"D0 F6"
"88"
"D0 F8"
"68"
"AA"
"BD 8E C0"
"BD 8C C0"
"A0 08"
"BD 8C C0"
"48")
kIDDiversiDOSBootloader = bytes.fromhex("B3 A3 A0 D2 CF D2 D2 C5 8D 87 8D")
kIDProDOSBootloader = bytes.fromhex(
"01"
"38" # SEC
"B0 03" # BCS +3
"4C") # JMP
kIDPascalBootloader1 = bytes.fromhex(
"01"
"E0 60" # CPX #$60
"F0 03" # BEQ +3
"4C" + find.WILDSTR + "08") # JMP $08**
kIDPascalBootloader2 = bytes.fromhex(
"01"
"E0 70" # CPX #$70
"B0 04" # BCS +4
"E0 40" # CPX #$40
"B0") # BCS
kIDDavidDOS1 = bytes.fromhex(
"A5 27"
"C9 09"
"D0 17")
kIDDavidDOS2 = bytes.fromhex(
"A2" + find.WILDSTR + \
"BD" + find.WILDSTR + " 08" + \
"9D" + find.WILDSTR + " 04" + \
"CA"
"10 F7")
kIDDatasoft = bytes.fromhex(
"01 4C 7E 08 04 8A 0C B8"
"00 56 10 7A 00 00 1A 16"
"12 0E 0A 06 53 18 9A 02"
"10 1B 02 10 4D 56 15 0B"
"BF 14 14 54 54 54 92 81"
"1B 10 10 41 06 73 0A 10"
"33 4E 00 73 12 10 33 7C"
"00 11 20 E3 49 50 73 1A"
"10 41 00 23 80 5B 0A 10"
"0B 4E 9D 0A 10 9D 0C 10"
"60 1E 53 10 90 53 BC 90"
"53 00 90 D8 52 00 D8 7C"
"00 53 80 0B 06 41 00 09"
"04 45 0C 63 04 90 94 D0"
"D4 23 04 91 A1 EB CD 06"
"95 A1 E1 98 97 86")
kIDMicrograms1 = bytes.fromhex(
"A5 27"
"C9 09"
"D0 12"
"A9 C6"
"85 3F")
kIDMicrograms2 = bytes.fromhex("4C 00")
kIDQuickDOS = bytes.fromhex(
"A5 27"
"C9 09"
"D0 27"
"78"
"AD 83 C0")
kIDRDOS = bytes.fromhex(
"01"
"A9 60"
"8D 01 08"
"A2 00"
"A0 1F"
"B9 00 08"
"49")
kIDDOS33a = bytes.fromhex(
"01"
"A5 27"
"C9 09"
"D0 18"
"A5 2B"
"4A"
"4A"
"4A"
"4A"
"09 C0"
"85 3F"
"A9 5C"
"85 3E"
"18"
"AD FE 08"
"6D FF 08" + \
find.WILDSTR + find.WILDSTR + find.WILDSTR + \
"AE FF 08"
"30 15"
"BD 4D 08"
"85 3D"
"CE FF 08"
"AD FE 08"
"85 27"
"CE FE 08"
"A6 2B"
"6C 3E 00"
"EE FE 08"
"EE FE 08")


@ -1,4 +0,0 @@
from passport.loggers import BaseLogger
class SilentLogger(BaseLogger):
"""print nothing"""


@ -1,572 +0,0 @@
#!/usr/bin/env python3
# (c) 2018 by 4am
# MIT-licensed
import argparse
import binascii
import bitarray # https://pypi.org/project/bitarray/
import collections
import itertools
import os
__version__ = "0.3"
__date__ = "2018-07-23"
__progname__ = "wozardry"
__displayname__ = __progname__ + " " + __version__ + " by 4am (" + __date__ + ")"
# domain-specific constants defined in .woz specification
kWOZ1 = b"WOZ1"
kINFO = b"INFO"
kTMAP = b"TMAP"
kTRKS = b"TRKS"
kMETA = b"META"
kBitstreamLengthInBytes = 6646
kLanguages = ("English","Spanish","French","German","Chinese","Japanese","Italian","Dutch","Portugese","Danish","Finnish","Norwegian","Swedish","Russian","Polish","Turkish","Arabic","Thai","Czech","Hungarian","Catalan","Croatian","Greek","Hebrew","Romanian","Slovak","Ukranian","Indonesian","Malay","Vietnamese","Other")
kRequiresRAM = ("16K","24K","32K","48K","64K","128K","256K","512K","768K","1M","1.25M","1.5M+","Unknown")
kRequiresMachine = ("2","2+","2e","2c","2e+","2gs","2c+","3","3+")
# strings and things, for print routines and error messages
sEOF = "Unexpected EOF"
sBadChunkSize = "Bad chunk size"
dNoYes = {False:"no",True:"yes"}
tQuarters = (".00",".25",".50",".75")
# errors that may be raised
class WozError(Exception): pass # base class
class WozCRCError(WozError): pass
class WozFormatError(WozError): pass
class WozEOFError(WozFormatError): pass
class WozHeaderError(WozFormatError): pass
class WozHeaderError_NoWOZ1(WozHeaderError): pass
class WozHeaderError_NoFF(WozHeaderError): pass
class WozHeaderError_NoLF(WozHeaderError): pass
class WozINFOFormatError(WozFormatError): pass
class WozINFOFormatError_BadVersion(WozINFOFormatError): pass
class WozINFOFormatError_BadDiskType(WozINFOFormatError): pass
class WozINFOFormatError_BadWriteProtected(WozINFOFormatError): pass
class WozINFOFormatError_BadSynchronized(WozINFOFormatError): pass
class WozINFOFormatError_BadCleaned(WozINFOFormatError): pass
class WozINFOFormatError_BadCreator(WozINFOFormatError): pass
class WozTMAPFormatError(WozFormatError): pass
class WozTMAPFormatError_BadTRKS(WozTMAPFormatError): pass
class WozTRKSFormatError(WozFormatError): pass
class WozMETAFormatError(WozFormatError): pass
class WozMETAFormatError_DuplicateKey(WozFormatError): pass
class WozMETAFormatError_BadValue(WozFormatError): pass
class WozMETAFormatError_BadLanguage(WozFormatError): pass
class WozMETAFormatError_BadRAM(WozFormatError): pass
class WozMETAFormatError_BadMachine(WozFormatError): pass
def from_uint32(b):
return int.from_bytes(b, byteorder="little")
from_uint16=from_uint32
def to_uint32(b):
return b.to_bytes(4, byteorder="little")
def to_uint16(b):
return b.to_bytes(2, byteorder="little")
def to_uint8(b):
return b.to_bytes(1, byteorder="little")
def raise_if(cond, e, s=""):
if cond: raise e(s)
class Track:
def __init__(self, bits, bit_count, speed=None):
self.bits = bits
while len(self.bits) > bit_count:
self.bits.pop()
self.bit_count = bit_count
self.speed = speed
self.bit_index = 0
self.revolutions = 0
def bit(self):
b = self.bits[self.bit_index] and 1 or 0
self.bit_index += 1
if self.bit_index >= self.bit_count:
self.bit_index = 0
self.revolutions += 1
yield b
def nibble(self):
b = 0
while b == 0:
b = next(self.bit())
n = 0x80
for bit_index in range(6, -1, -1):
b = next(self.bit())
n += b << bit_index
yield n
def rewind(self, bit_count):
self.bit_index -= 1
if self.bit_index < 0:
self.bit_index = self.bit_count - 1
self.revolutions -= 1
def find(self, sequence):
starting_revolutions = self.revolutions
seen = [0] * len(sequence)
while (self.revolutions < starting_revolutions + 2):
del seen[0]
seen.append(next(self.nibble()))
if tuple(seen) == tuple(sequence): return True
return False
class WozTrack(Track):
def __init__(self, bits, bit_count, splice_point = 0xFFFF, splice_nibble = 0, splice_bit_count = 0):
Track.__init__(self, bits, bit_count)
self.splice_point = splice_point
self.splice_nibble = splice_nibble
self.splice_bit_count = splice_bit_count
class DiskImage: # base class
def __init__(self, filename=None, stream=None):
raise_if(not filename and not stream, WozError, "no input")
self.filename = filename
self.tracks = []
def seek(self, track_num):
"""returns Track object for the given track, or None if the track is not part of this disk image. track_num can be 0..40 in 0.25 increments (0, 0.25, 0.5, 0.75, 1, &c.)"""
return None
class WozValidator:
def validate_info_version(self, version):
raise_if(version != b'\x01', WozINFOFormatError_BadVersion, "Unknown version (expected 1, found %s)" % version)
def validate_info_disk_type(self, disk_type):
raise_if(disk_type not in (b'\x01',b'\x02'), WozINFOFormatError_BadDiskType, "Unknown disk type (expected 1 or 2, found %s)" % disk_type)
def validate_info_write_protected(self, write_protected):
raise_if(write_protected not in (b'\x00',b'\x01'), WozINFOFormatError_BadWriteProtected, "Unknown write protected flag (expected 0 or 1, found %s)" % write_protected)
def validate_info_synchronized(self, synchronized):
raise_if(synchronized not in (b'\x00',b'\x01'), WozINFOFormatError_BadSynchronized, "Unknown synchronized flag (expected 0, or 1, found %s)" % synchronized)
def validate_info_cleaned(self, cleaned):
raise_if(cleaned not in (b'\x00',b'\x01'), WozINFOFormatError_BadCleaned, "Unknown cleaned flag (expected 0 or 1, found %s)" % cleaned)
def validate_info_creator(self, creator_as_bytes):
raise_if(len(creator_as_bytes) > 32, WozINFOFormatError_BadCreator, "Creator is longer than 32 bytes")
try:
creator_as_bytes.decode("UTF-8")
except:
raise_if(True, WozINFOFormatError_BadCreator, "Creator is not valid UTF-8")
def encode_info_creator(self, creator_as_string):
creator_as_bytes = creator_as_string.encode("UTF-8").ljust(32, b" ")
self.validate_info_creator(creator_as_bytes)
return creator_as_bytes
def decode_info_creator(self, creator_as_bytes):
self.validate_info_creator(creator_as_bytes)
return creator_as_bytes.decode("UTF-8").strip()
def validate_metadata(self, metadata_as_bytes):
try:
metadata = metadata_as_bytes.decode("UTF-8")
except:
raise WozMETAFormatError("Metadata is not valid UTF-8")
def decode_metadata(self, metadata_as_bytes):
self.validate_metadata(metadata_as_bytes)
return metadata_as_bytes.decode("UTF-8")
def validate_metadata_value(self, value):
raise_if("\t" in value, WozMETAFormatError_BadValue, "Invalid metadata value (contains tab character)")
raise_if("\n" in value, WozMETAFormatError_BadValue, "Invalid metadata value (contains linefeed character)")
raise_if("|" in value, WozMETAFormatError_BadValue, "Invalid metadata value (contains pipe character)")
def validate_metadata_language(self, language):
raise_if(language and (language not in kLanguages), WozMETAFormatError_BadLanguage, "Invalid metadata language")
def validate_metadata_requires_ram(self, requires_ram):
raise_if(requires_ram and (requires_ram not in kRequiresRAM), WozMETAFormatError_BadRAM, "Invalid metadata requires_ram")
def validate_metadata_requires_machine(self, requires_machine):
raise_if(requires_machine and (requires_machine not in kRequiresMachine), WozMETAFormatError_BadMachine, "Invalid metadata requires_machine")
class WozReader(DiskImage, WozValidator):
def __init__(self, filename=None, stream=None):
DiskImage.__init__(self, filename, stream)
self.tmap = None
self.info = collections.OrderedDict()
self.meta = collections.OrderedDict()
with stream or open(filename, "rb") as f:
header_raw = f.read(8)
raise_if(len(header_raw) != 8, WozEOFError, sEOF)
self.__process_header(header_raw)
crc_raw = f.read(4)
raise_if(len(crc_raw) != 4, WozEOFError, sEOF)
crc = from_uint32(crc_raw)
all_data = []
while True:
chunk_id = f.read(4)
if not chunk_id: break
raise_if(len(chunk_id) != 4, WozEOFError, sEOF)
all_data.append(chunk_id)
chunk_size_raw = f.read(4)
raise_if(len(chunk_size_raw) != 4, WozEOFError, sEOF)
all_data.append(chunk_size_raw)
chunk_size = from_uint32(chunk_size_raw)
data = f.read(chunk_size)
raise_if(len(data) != chunk_size, WozEOFError, sEOF)
all_data.append(data)
if chunk_id == kINFO:
raise_if(chunk_size != 60, WozINFOFormatError, sBadChunkSize)
self.__process_info(data)
elif chunk_id == kTMAP:
raise_if(chunk_size != 160, WozTMAPFormatError, sBadChunkSize)
self.__process_tmap(data)
elif chunk_id == kTRKS:
self.__process_trks(data)
elif chunk_id == kMETA:
self.__process_meta(data)
if crc:
raise_if(crc != binascii.crc32(b"".join(all_data)) & 0xffffffff, WozCRCError, "Bad CRC")
def __process_header(self, data):
raise_if(data[:4] != kWOZ1, WozHeaderError_NoWOZ1, "Magic string 'WOZ1' not present at offset 0")
raise_if(data[4] != 0xFF, WozHeaderError_NoFF, "Magic byte 0xFF not present at offset 4")
raise_if(data[5:8] != b"\x0A\x0D\x0A", WozHeaderError_NoLF, "Magic bytes 0x0A0D0A not present at offset 5")
def __process_info(self, data):
version = data[0]
self.validate_info_version(to_uint8(version))
disk_type = data[1]
self.validate_info_disk_type(to_uint8(disk_type))
write_protected = data[2]
self.validate_info_write_protected(to_uint8(write_protected))
synchronized = data[3]
self.validate_info_synchronized(to_uint8(synchronized))
cleaned = data[4]
self.validate_info_cleaned(to_uint8(cleaned))
creator = self.decode_info_creator(data[5:37])
self.info["version"] = version # int
self.info["disk_type"] = disk_type # int
self.info["write_protected"] = (write_protected == 1) # boolean
self.info["synchronized"] = (synchronized == 1) # boolean
self.info["cleaned"] = (cleaned == 1) # boolean
self.info["creator"] = creator # string
def __process_tmap(self, data):
self.tmap = list(data)
def __process_trks(self, data):
i = 0
while i < len(data):
raw_bytes = data[i:i+kBitstreamLengthInBytes]
raise_if(len(raw_bytes) != kBitstreamLengthInBytes, WozEOFError, sEOF)
i += kBitstreamLengthInBytes
bytes_used_raw = data[i:i+2]
raise_if(len(bytes_used_raw) != 2, WozEOFError, sEOF)
bytes_used = from_uint16(bytes_used_raw)
raise_if(bytes_used > kBitstreamLengthInBytes, WozTRKSFormatError, "TRKS chunk %d bytes_used is out of range" % len(self.tracks))
i += 2
bit_count_raw = data[i:i+2]
raise_if(len(bit_count_raw) != 2, WozEOFError, sEOF)
bit_count = from_uint16(bit_count_raw)
i += 2
splice_point_raw = data[i:i+2]
raise_if(len(splice_point_raw) != 2, WozEOFError, sEOF)
splice_point = from_uint16(splice_point_raw)
if splice_point != 0xFFFF:
raise_if(splice_point > bit_count, WozTRKSFormatError, "TRKS chunk %d splice_point is out of range" % len(self.tracks))
i += 2
splice_nibble = data[i]
i += 1
splice_bit_count = data[i]
if splice_point != 0xFFFF:
raise_if(splice_bit_count not in (8,9,10), WozTRKSFormatError, "TRKS chunk %d splice_bit_count is out of range" % len(self.tracks))
i += 3
bits = bitarray.bitarray(endian="big")
bits.frombytes(raw_bytes)
self.tracks.append(WozTrack(bits, bit_count, splice_point, splice_nibble, splice_bit_count))
for trk, i in zip(self.tmap, itertools.count()):
raise_if(trk != 0xFF and trk >= len(self.tracks), WozTMAPFormatError_BadTRKS, "Invalid TMAP entry: track %d%s points to non-existent TRKS chunk %d" % (i/4, tQuarters[i%4], trk))
def __process_meta(self, metadata_as_bytes):
metadata = self.decode_metadata(metadata_as_bytes)
for line in metadata.split("\n"):
if not line: continue
columns_raw = line.split("\t")
raise_if(len(columns_raw) != 2, WozMETAFormatError, "Malformed metadata")
key, value_raw = columns_raw
raise_if(key in self.meta, WozMETAFormatError_DuplicateKey, "Duplicate metadata key %s" % key)
values = value_raw.split("|")
if key == "language":
list(map(self.validate_metadata_language, values))
elif key == "requires_ram":
list(map(self.validate_metadata_requires_ram, values))
elif key == "requires_machine":
list(map(self.validate_metadata_requires_machine, values))
self.meta[key] = len(values) == 1 and values[0] or tuple(values)
def seek(self, track_num):
"""returns Track object for the given track, or None if the track is not part of this disk image. track_num can be 0..40 in 0.25 increments (0, 0.25, 0.5, 0.75, 1, &c.)"""
if type(track_num) != float:
track_num = float(track_num)
if track_num < 0.0 or \
track_num > 39.75 or \
track_num.as_integer_ratio()[1] not in (1,2,4):
raise WozError("Invalid track %s" % track_num)
trk_id = self.tmap[int(track_num * 4)]
if trk_id == 0xFF: return None
return self.tracks[trk_id]
class WozWriter(WozValidator):
def __init__(self, creator):
self.info = collections.OrderedDict()
self.info["version"] = 1
self.info["disk_type"] = 1
self.info["write_protected"] = False
self.info["synchronized"] = False
self.info["cleaned"] = False
self.info["creator"] = creator
self.tracks = []
self.tmap = [0xFF]*160
self.meta = collections.OrderedDict()
def add(self, half_phase, track):
trk_id = len(self.tracks)
self.tracks.append(track)
self.tmap[half_phase] = trk_id
# if half_phase:
# self.tmap[half_phase - 1] = trk_id
# if half_phase < 159:
# self.tmap[half_phase + 1] = trk_id
def add_track(self, track_num, track):
self.add(int(track_num * 4), track)
def build_info(self):
chunk = bytearray()
chunk.extend(kINFO) # chunk ID
chunk.extend(to_uint32(60)) # chunk size (constant)
version_raw = to_uint8(self.info["version"])
self.validate_info_version(version_raw)
disk_type_raw = to_uint8(self.info["disk_type"])
self.validate_info_disk_type(disk_type_raw)
write_protected_raw = to_uint8(self.info["write_protected"])
self.validate_info_write_protected(write_protected_raw)
synchronized_raw = to_uint8(self.info["synchronized"])
self.validate_info_synchronized(synchronized_raw)
cleaned_raw = to_uint8(self.info["cleaned"])
self.validate_info_cleaned(cleaned_raw)
creator_raw = self.encode_info_creator(self.info["creator"])
chunk.extend(version_raw) # version (int, probably 1)
chunk.extend(disk_type_raw) # disk type (1=5.25 inch, 2=3.5 inch)
chunk.extend(write_protected_raw) # write-protected (0=no, 1=yes)
chunk.extend(synchronized_raw) # tracks synchronized (0=no, 1=yes)
chunk.extend(cleaned_raw) # weakbits cleaned (0=no, 1=yes)
chunk.extend(creator_raw) # creator
chunk.extend(b"\x00" * 23) # reserved
return chunk
def build_tmap(self):
chunk = bytearray()
chunk.extend(kTMAP) # chunk ID
chunk.extend(to_uint32(160)) # chunk size
chunk.extend(bytes(self.tmap))
return chunk
def build_trks(self):
chunk = bytearray()
chunk.extend(kTRKS) # chunk ID
chunk_size = len(self.tracks)*6656
chunk.extend(to_uint32(chunk_size)) # chunk size
for track in self.tracks:
raw_bytes = track.bits.tobytes()
chunk.extend(raw_bytes) # bitstream as raw bytes
chunk.extend(b"\x00" * (6646 - len(raw_bytes))) # padding to 6646 bytes
chunk.extend(to_uint16(len(raw_bytes))) # bytes used
chunk.extend(to_uint16(track.bit_count)) # bit count
chunk.extend(b"\xFF\xFF") # splice point (none)
chunk.extend(b"\xFF") # splice nibble (none)
chunk.extend(b"\xFF") # splice bit count (none)
chunk.extend(b"\x00\x00") # reserved
return chunk
def build_meta(self):
if not self.meta: return b""
for key, value_raw in self.meta.items():
if type(value_raw) == str:
values = [value_raw]
else:
values = value_raw
list(map(self.validate_metadata_value, values))
if key == "language":
list(map(self.validate_metadata_language, values))
elif key == "requires_ram":
list(map(self.validate_metadata_requires_ram, values))
elif key == "requires_machine":
list(map(self.validate_metadata_requires_machine, values))
data = b"\x0A".join(
[k.encode("UTF-8") + \
b"\x09" + \
(type(v) in (list,tuple) and "|".join(v) or v).encode("UTF-8") \
for k, v in self.meta.items()])
chunk = bytearray()
chunk.extend(kMETA) # chunk ID
chunk.extend(to_uint32(len(data))) # chunk size
chunk.extend(data)
return chunk
def build_head(self, crc):
chunk = bytearray()
chunk.extend(kWOZ1) # magic bytes
chunk.extend(b"\xFF\x0A\x0D\x0A") # more magic bytes
chunk.extend(to_uint32(crc)) # CRC32 of rest of file (calculated in caller)
return chunk
def write(self, stream):
info = self.build_info()
tmap = self.build_tmap()
trks = self.build_trks()
meta = self.build_meta()
crc = binascii.crc32(info + tmap + trks + meta)
head = self.build_head(crc)
stream.write(head)
stream.write(info)
stream.write(tmap)
stream.write(trks)
stream.write(meta)
#---------- command line interface ----------
class BaseCommand:
def __init__(self, name):
self.name = name
def setup(self, subparser, description=None, epilog=None, help=".woz disk image", formatter_class=argparse.HelpFormatter):
self.parser = subparser.add_parser(self.name, description=description, epilog=epilog, formatter_class=formatter_class)
self.parser.add_argument("file", help=help)
self.parser.set_defaults(action=self)
def __call__(self, args):
self.woz_image = WozReader(args.file)
class CommandVerify(BaseCommand):
def __init__(self):
BaseCommand.__init__(self, "verify")
def setup(self, subparser):
BaseCommand.setup(self, subparser,
description="Verify file structure and metadata of a .woz disk image (produces no output unless a problem is found)")
class CommandDump(BaseCommand):
kWidth = 30
def __init__(self):
BaseCommand.__init__(self, "dump")
def setup(self, subparser):
BaseCommand.setup(self, subparser,
description="Print all available information and metadata in a .woz disk image")
def __call__(self, args):
BaseCommand.__call__(self, args)
self.print_tmap()
self.print_meta()
self.print_info()
def print_info(self):
print("INFO: File format version:".ljust(self.kWidth), "%d" % self.woz_image.info["version"])
print("INFO: Disk type:".ljust(self.kWidth), ("5.25-inch", "3.5-inch")[self.woz_image.info["disk_type"]-1])
print("INFO: Write protected:".ljust(self.kWidth), dNoYes[self.woz_image.info["write_protected"]])
print("INFO: Track synchronized:".ljust(self.kWidth), dNoYes[self.woz_image.info["synchronized"]])
print("INFO: Weakbits cleaned:".ljust(self.kWidth), dNoYes[self.woz_image.info["cleaned"]])
print("INFO: Creator:".ljust(self.kWidth), self.woz_image.info["creator"])
def print_tmap(self):
for trk, i in zip(self.woz_image.tmap, itertools.count()):
if trk != 0xFF:
print(("TMAP: Track %d%s" % (i/4, tQuarters[i%4])).ljust(self.kWidth), "TRKS %d" % (trk))
def print_meta(self):
if not self.woz_image.meta: return
for key, values in self.woz_image.meta.items():
if type(values) == str:
values = [values]
print(("META: " + key + ":").ljust(self.kWidth), values[0])
for value in values[1:]:
print("META: ".ljust(self.kWidth), value)
class CommandEdit(BaseCommand):
def __init__(self):
BaseCommand.__init__(self, "edit")
def setup(self, subparser):
BaseCommand.setup(self,
subparser,
description="Edit information and metadata in a .woz disk image",
epilog="""Tips:
- Use repeated flags to edit multiple fields at once.
- Use "key:" with no value to delete a metadata field.
- Keys are case-sensitive.
- Some values have format restrictions; read the .woz specification.""",
help=".woz disk image (modified in place)",
formatter_class=argparse.RawDescriptionHelpFormatter)
self.parser.add_argument("-i", "--info", type=str, action="append",
help="""change information field.
INFO format is "key:value".
Acceptable keys are disk_type, write_protected, synchronized, cleaned, creator, version.
Other keys are ignored.
For boolean fields, use "1" or "true" or "yes" for true, "0" or "false" or "no" for false.""")
self.parser.add_argument("-m", "--meta", type=str, action="append",
help="""change metadata field.
META format is "key:value".
Standard keys are title, subtitle, publisher, developer, copyright, version, language, requires_ram,
requires_machine, notes, side, side_name, contributor, image_date. Other keys are allowed.""")
def __call__(self, args):
BaseCommand.__call__(self, args)
# maintain creator if there is one, otherwise use default
output = WozWriter(self.woz_image.info.get("creator", __displayname__))
output.tmap = self.woz_image.tmap
output.tracks = self.woz_image.tracks
output.info = self.woz_image.info.copy()
output.meta = self.woz_image.meta.copy()
# add all new info fields
for i in args.info or ():
k, v = i.split(":", 1)
if k in ("write_protected","synchronized","cleaned"):
v = v.lower() in ("1", "true", "yes")
output.info[k] = v
# add all new metadata fields
for m in args.meta or ():
k, v = m.split(":", 1)
v = v.split("|")
if len(v) == 1:
v = v[0]
if v:
output.meta[k] = v
elif k in output.meta.keys():
del output.meta[k]
tmpfile = args.file + ".ardry"
with open(tmpfile, "wb") as f:
output.write(f)
os.rename(tmpfile, args.file)
if __name__ == "__main__":
import sys
raise_if = lambda cond, e, s="": cond and sys.exit("%s: %s" % (e.__name__, s))
cmds = [CommandDump(), CommandVerify(), CommandEdit()]
parser = argparse.ArgumentParser(prog=__progname__,
description="""A multi-purpose tool for manipulating .woz disk images.
See '""" + __progname__ + """ <command> -h' for help on individual commands.""",
formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument("-v", "--version", action="version", version=__displayname__)
sp = parser.add_subparsers(dest="command", help="command")
for command in cmds:
command.setup(sp)
args = parser.parse_args()
args.action(args)
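
The reader and writer above can also be driven programmatically. A minimal sketch, assuming this module is importable and using only the classes defined above ("game.woz" and "copy.woz" are hypothetical filenames): read an image, inspect it, and write a copy.
# minimal sketch -- filenames are hypothetical
img = WozReader("game.woz")
print("created by:", img.info["creator"])
print("track 17 mapped:", img.seek(17) is not None)
out = WozWriter(img.info["creator"])
out.info = img.info.copy()
out.meta = img.meta.copy()
for quarter_track, trk_id in enumerate(img.tmap):
    if trk_id != 0xFF:
        out.add(quarter_track, img.tracks[trk_id])
with open("copy.woz", "wb") as f:
    out.write(f)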


@ -1,42 +0,0 @@
class Patch:
# represents a single patch that could be applied to a disk image
def __init__(self, track_num, sector_num, byte_offset, new_value, id=None, params={}):
self.track_num = track_num
self.sector_num = sector_num
self.byte_offset = byte_offset
self.new_value = new_value # (can be 0-length bytearray if this "patch" is really just an informational message with no changes)
self.id = id # for logger.PrintByID (can be None)
self.params = params.copy()
self.params["track"] = track_num
self.params["sector"] = sector_num
self.params["offset"] = byte_offset
class Patcher: # base class
def __init__(self, g):
self.g = g
def should_run(self, track_num):
"""returns True if this patcher applies to the given track in the current process (possibly affected by state in self.g), or False otherwise"""
return False
def run(self, logical_sectors, track_num):
"""returns list of Patch objects representing patches that could be applied to logical_sectors"""
return []
from .a5count import *
from .a6bc95 import *
from .advint import *
from .bademu import *
from .bademu2 import *
from .bbf9 import *
from .bootcounter import *
from .border import *
from .d5d5f7 import *
from .mecc1 import *
from .mecc2 import *
from .mecc3 import *
from .mecc4 import *
from .microfun import *
from .rwts import *
from .sunburst import *
from .universale7 import *
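
The Patch/Patcher pair above defines the contract that every module imported here follows: should_run() gates on the track number and shared state in self.g, and run() returns a (possibly empty) list of Patch objects. A purely illustrative patcher in the same shape as the real ones below (the search pattern and id are made up and match nothing on real disks):
from passport.patchers import Patch, Patcher
from passport.util import *

class ExamplePatcher(Patcher):
    """illustrative sketch only"""
    def should_run(self, track_num):
        return track_num == 0
    def run(self, logical_sectors, track_num):
        offset = find.wild(concat_track(logical_sectors), b'\xDE\xAD\xBE\xEF')
        if offset == -1: return []
        return [Patch(track_num, offset // 256, offset % 256, b'\x60', "example")]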


@ -1,22 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class A5CountPatcher(Patcher):
"""nibble count between $A5 and address prologue
tested on
- Game Frame One
- Game Frame Two
"""
def should_run(self, track_num):
return self.g.is_pascal
def run(self, logical_sectors, track_num):
offset = find.wild(concat_track(logical_sectors),
b'\x07'
b'\xE6\x02'
b'\xD0\x03'
b'\x4C\xA5\x00'
b'\xC9\xA5')
if offset == -1: return []
return [Patch(track_num, offset // 256, 8 + (offset % 256), b'\xD0\x7B', "a5count")]
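
The 8 + here is a pattern-relative adjustment: the two bytes being replaced (the trailing C9 A5 compare) sit 8 bytes past the start of the matched sequence, so the patch lands at match offset plus 8 within the sector. A small worked example, assuming concat_track() simply joins the 256-byte logical sectors in order:
offset = 0x3A7                     # hypothetical match position in the joined track
sector_num  = offset // 256        # 3    -> logical sector containing the match
byte_offset = 8 + (offset % 256)   # 0xAF -> the C9 A5 bytes within that sector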


@ -1,36 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class A6BC95Patcher(Patcher):
"""nibble count after $A6 $BC $95 prologue
tested on
- The Secrets of Science Island
"""
def should_run(self, track_num):
return self.g.is_pascal
def run(self, logical_sectors, track_num):
buffy = concat_track(logical_sectors)
if -1 == find.wild(buffy,
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9\xA6'
b'\xD0\xED'):
return []
if -1 == find.wild(buffy,
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9\xBC'):
return []
if -1 == find.wild(buffy,
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9\x95'):
return []
offset = find.wild(buffy,
b'\xAE\xF8\x01'
b'\xA9\x0A'
b'\x8D\xFE\x01')
if offset == -1: return []
return [Patch(track_num, offset // 256, offset % 256, b'\x60', "a6bc95")]


@ -1,24 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class AdventureInternationalPatcher(Patcher):
"""encrypted protection check on Adventure International disks
tested on
- SAGA1 - Adventureland v2.1-416
- SAGA2 - Pirate Adventure v2.1-408
- SAGA5 - The Count v2.1-115
- SAGA6 - Strange Odyssey v2.1-119
"""
def should_run(self, track_num):
return True # TODO self.g.is_adventure_international
def run(self, logical_sectors, track_num):
buffy = concat_track(logical_sectors)
offset = find.wild(buffy,
b'\x85' + find.WILDCARD + find.WILDCARD + \
b'\x74\x45\x09'
b'\xD9\x32'
b'\x0C\x30')
if offset == -1: return []
return [Patch(track_num, offset // 256, offset % 256, b'\xD1\x59\xA7', "advint")]


@ -1,24 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class BadEmuPatcher(Patcher):
"""RWTS checks for timing bit by checking if data latch is still $D5 after waiting "too long" but this confuses legacy emulators (AppleWin, older versions of MAME) so we patch it for compatibility
tested on
- Dino Dig
- Make A Face
"""
def should_run(self, track_num):
return self.g.is_rwts and (track_num == 0)
def run(self, logical_sectors, track_num):
if not find.at(0x4F, logical_sectors[3],
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9\xD5'
b'\xD0\xF0'
b'\xEA'
b'\xBD\x8C\xC0'
b'\xC9\xD5'
b'\xF0\x12'): return []
return [Patch(0, 3, 0x58, b'\xF0\x06')]


@ -1,25 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class BadEmu2Patcher(Patcher):
"""RWTS checks for timing bit by checking if data latch is still $D5 after waiting "too long" but this confuses legacy emulators (AppleWin, older versions of MAME) so we patch it for compatibility
tested on
- Dino Dig
- Make A Face
"""
def should_run(self, track_num):
return self.g.is_rwts and (track_num == 0)
def run(self, logical_sectors, track_num):
if not find.at(0x4F, logical_sectors[3],
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\x4A'
b'\xC9\x6A'
b'\xD0\xEF'
b'\xBD\x8C\xC0'
b'\xC9\xD5'
b'\xF0\x12'): return []
return [Patch(0, 3, 0x59, b'\xF0\x05')]


@ -1,32 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class BBF9Patcher(Patcher):
"""patch nibble check seen in Sunburst disks 1988 and later
see write-up of 4am crack no. 1165 Muppet Slate
tested on
- Muppet Slate (1988)
- Memory Building Blocks (1989)
- Odd One Out (1989)
- Regrouping (1989)
- Simon Says (1989)
- Teddy and Iggy (1990)
- 1-2-3 Sequence Me (1991)
"""
def should_run(self, track_num):
return self.g.is_prodos
def run(self, logical_sectors, track_num):
buffy = concat_track(logical_sectors)
if -1 == find.wild(buffy,
b'\x8E\xC0'
b'\x18'
b'\xA5' + find.WILDCARD + \
b'\x69\x8C'
b'\x8D'): return []
offset = find.wild(buffy,
b'\xBD\x89\xC0')
if offset == -1: return []
return [Patch(track_num, offset // 256, offset % 256, b'\x18\x60', "bbf9")]


@ -1,25 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class BootCounterPatcher(Patcher):
"""MECC "limited backup" disks contain code to self-destruct after a certain number of boots"""
def should_run(self, track_num):
return track_num == 1
def run(self, logical_sectors, track_num):
if not find.wild_at(0x00, logical_sectors[0],
b'\xAD\xF3\x03'
b'\x8D\xF4\x03'
b'\x20\x2F\xFB'
b'\x20\x93\xFE'
b'\x20\x89\xFE'
b'\x20\x58\xFC'
b'\xA9\x0A'
b'\x85\x25'
b'\x2C' + find.WILDCARD + find.WILDCARD + \
b'\xCE\x17\x18'
b'\xD0\x05'
b'\xA9\x80'
b'\x8D\x18\x18'): return []
return [Patch(1, 0, 0x00, b'\x4C\x03\x1B', "bootcounter")]


@ -1,22 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class BorderPatcher(Patcher):
"""RWTS changes prologue and epilogue sequences with an RWTS swapper at $BE5A
tested on
- Arena
- Early Bird
"""
def should_run(self, track_num):
return self.g.is_boot0 and self.g.is_boot1 and track_num == 0
def run(self, logical_sectors, track_num):
if not find.at(0x5A, logical_sectors[8],
b'\xC9\x23'
b'\xB0\xEB'
b'\x0A'
b'\x20\x6C\xBF'
b'\xEA'
b'\xEA'): return []
return [Patch(0, 8, 0x5A, b'\x48\xA0\x01\xB1\x3C\x6A\x68\x90\x08\x0A', "border")]


@ -1,42 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class D5D5F7Patcher(Patcher):
"""nibble count with weird bitstream involving $D5 and $F7 as delimiters
tested on
- Ace Detective
- Cat 'n Mouse
- Cotton Tales
- Dyno-Quest
- Easy Street
- Fraction-oids
- Math Magic
- RoboMath
- NoteCard Maker
"""
def should_run(self, track_num):
# TODO
return True
def run(self, logical_sectors, track_num):
offset = find.wild(concat_track(logical_sectors),
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\x48'
b'\x68'
b'\xC9\xD5'
b'\xD0\xF5'
b'\xA0\x00' + \
b'\x8C' + find.WILDCARD + find.WILDCARD + \
b'\xBD\x8C\xC0'
b'\x10\xFB'
b'\xC9\xD5'
b'\xF0\x0F'
b'\xC9\xF7'
b'\xD0\x01'
b'\xC8'
b'\x18'
b'\x6D')
if offset == -1: return []
return [Patch(track_num, offset // 256, offset % 256, b'\x60', "d5d5f7")]


@ -1,23 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class MECC1Patcher(Patcher):
"""MECC fastloader variant 1
tested on
- A-153 Word Munchers 1.4
"""
def should_run(self, track_num):
return self.g.mecc_variant == 1 and track_num == 0
def run(self, logical_sectors, track_num):
patches = []
for a, x, v in ((0x0B, 0x08, b'\xD5'),
(0x0B, 0x12, b'\xAA'),
(0x0B, 0x1D, b'\x96'),
(0x0B, 0x8F, b'\xD5'),
(0x0B, 0x99, b'\xAA'),
(0x0B, 0xA3, b'\xAD')):
if logical_sectors[a][x] != v[0]:
patches.append(Patch(0, a, x, v))
return patches


@ -1,25 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class MECC2Patcher(Patcher):
"""MECC fastloader variant 2
tested on
- A-175 Phonics Prime Time - Initial Consonants 1.0
- A-176 Phonics Prime Time - Final Consonants 1.0
- A-179 Phonics Prime Time - Blends and Digraphs 1.0
"""
def should_run(self, track_num):
return self.g.mecc_variant == 2 and track_num == 0
def run(self, logical_sectors, track_num):
patches = []
for a, x, v in ((7, 0x83, b'\xD5'),
(7, 0x8D, b'\xAA'),
(7, 0x98, b'\x96'),
(7, 0x15, b'\xD5'),
(7, 0x1F, b'\xAA'),
(7, 0x2A, b'\xAD')):
if logical_sectors[a][x] != v[0]:
patches.append(Patch(0, a, x, v))
return patches


@ -1,23 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class MECC3Patcher(Patcher):
"""MECC fastloader variant 3
tested on
- A-153 Word Munchers 1.1
"""
def should_run(self, track_num):
return self.g.mecc_variant == 3 and track_num == 0
def run(self, logical_sectors, track_num):
patches = []
for a, x, v in ((0x0A, 0xE8, b'\xD5'),
(0x0A, 0xF2, b'\xAA'),
(0x0A, 0xFD, b'\x96'),
(0x0B, 0x6F, b'\xD5'),
(0x0B, 0x79, b'\xAA'),
(0x0B, 0x83, b'\xAD')):
if logical_sectors[a][x] != v[0]:
patches.append(Patch(0, a, x, v))
return patches


@ -1,22 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class MECC4Patcher(Patcher):
"""MECC fastloader variant 4
tested on
"""
def should_run(self, track_num):
return self.g.mecc_variant == 4 and track_num == 0
def run(self, logical_sectors, track_num):
patches = []
for a, x, v in ((8, 0x83, b'\xD5'),
(8, 0x8D, b'\xAA'),
(8, 0x98, b'\x96'),
(8, 0x15, b'\xD5'),
(8, 0x1F, b'\xAA'),
(8, 0x2A, b'\xAD')):
if logical_sectors[a][x] != v[0]:
patches.append(Patch(0, a, x, v))
return patches


@ -1,21 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class MicrofunPatcher(Patcher):
"""RWTS jumps to nibble check after reading certain sectors
tested on
- Station 5
- The Heist
- Miner 2049er (re-release)
- Miner 2049er II
- Short Circuit
"""
def should_run(self, track_num):
return self.g.is_rwts and (track_num == 0)
def run(self, logical_sectors, track_num):
offset = find.wild(concat_track(logical_sectors),
b'\xA0\x00\x84\x26\x84\x27\xBD\x8C\xC0')
if offset == -1: return []
return [Patch(track_num, offset // 256, offset % 256, b'\x18\x60', "microfun")]


@ -1,69 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class RWTSPatcher(Patcher):
"""RWTS fixups for DOS 3.3-shaped RWTSen"""
def should_run(self, track_num):
return self.g.is_rwts and (track_num == 0)
def run(self, logical_sectors, track_num):
patches = []
lda_bpl = b'\xBD\x8C\xC0\x10\xFB'
lda_bpl_cmp = lda_bpl + b'\xC9' + find.WILDCARD
lda_bpl_eor = lda_bpl + b'\x49' + find.WILDCARD
lda_jsr = b'\xA9' + find.WILDCARD + b'\x20'
lda_jsr_d5 = lda_jsr + b'\xD5'
lda_jsr_b8 = lda_jsr + b'\xB8'
for a, b, c, d, e in (
# address prologue byte 1 (read)
(0x55, 3, b'\xD5', 0x4F, lda_bpl_cmp + b'\xD0\xF0\xEA'),
# address prologue byte 2 (read)
(0x5F, 3, b'\xAA', 0x59, lda_bpl_cmp + b'\xD0\xF2\xA0\x03'),
# address prologue byte 3 (read)
(0x6A, 3, b'\x96', 0x64, lda_bpl_cmp + b'\xD0\xE7'),
# address epilogue byte 1 (read)
(0x91, 3, b'\xDE', 0x8B, lda_bpl_cmp + b'\xD0\xAE'),
# address epilogue byte 2 (read)
(0x9B, 3, b'\xAA', 0x95, lda_bpl_cmp + b'\xD0\xA4\x18'),
# data prologue byte 1 (read)
(0xE7, 2, b'\xD5', 0xE1, lda_bpl_eor + b'\xD0\xF4\xEA'),
# data prologue byte 2 (read)
(0xF1, 2, b'\xAA', 0xEB, lda_bpl_cmp + b'\xD0\xF2\xA0\x56'),
# data prologue byte 3 (read)
(0xFC, 2, b'\xAD', 0xF6, lda_bpl_cmp + b'\xD0\xE7'),
# data epilogue byte 1 (read)
(0x35, 3, b'\xDE', 0x2F, lda_bpl_cmp + b'\xD0\x0A\xEA'),
# data epilogue byte 2 (read)
(0x3F, 3, b'\xAA', 0x39, lda_bpl_cmp + b'\xF0\x5C\x38'),
# address prologue byte 1 (write)
(0x7A, 6, b'\xD5', 0x79, lda_jsr_d5),
# address prologue byte 2 (write)
(0x7F, 6, b'\xAA', 0x7E, lda_jsr_d5),
# address prologue byte 3 (write)
(0x84, 6, b'\x96', 0x83, lda_jsr_d5),
# address epilogue byte 1 (write)
(0xAE, 6, b'\xDE', 0xAD, lda_jsr_d5),
# address epilogue byte 2 (write)
(0xB3, 6, b'\xAA', 0xB2, lda_jsr_d5),
# address epilogue byte 3 (write)
(0xB8, 6, b'\xEB', 0xB7, lda_jsr_d5),
# data prologue byte 1 (write)
(0x53, 2, b'\xD5', 0x52, lda_jsr_b8),
# data prologue byte 2 (write)
(0x58, 2, b'\xAA', 0x57, lda_jsr_b8),
# data prologue byte 3 (write)
(0x5D, 2, b'\xAD', 0x5C, lda_jsr_b8),
# data epilogue byte 1 (write)
(0x9E, 2, b'\xDE', 0x9D, lda_jsr_b8),
# data epilogue byte 2 (write)
(0xA3, 2, b'\xAA', 0xA2, lda_jsr_b8),
# data epilogue byte 3 (write)
(0xA8, 2, b'\xEB', 0xA7, lda_jsr_b8),
# data epilogue byte 4 (write)
# needed by some Sunburst disks
(0xAD, 2, b'\xFF', 0xAC, lda_jsr_b8),
):
if not find.at(a, logical_sectors[b], c) and \
find.wild_at(d, logical_sectors[b], e):
patches.append(Patch(0, b, a, c))
return patches


@ -1,54 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class SunburstPatcher(Patcher):
"""RWTS with track-based address and data prologue modifications
tested on
- Challenge Math
- Safari Search
- Ten Clues
- The Factory
- Trading Post
- Word Quest
"""
def should_run(self, track_num):
return self.g.is_rwts and (track_num == 0)
def run(self, logical_sectors, track_num):
if not find.at(0x40, logical_sectors[3], b'\xD0'): return []
if not find.at(0x9C, logical_sectors[3], b'\xF0'): return []
if not find.at(0x69, logical_sectors[4], bytes.fromhex(
"48"
"A5 2A"
"4A"
"A8"
"B9 29 BA"
"8D 6A B9"
"8D 84 BC"
"B9 34 BA"
"8D FC B8"
"8D 5D B8"
"C0 11"
"D0 03"
"A9 02"
"AC"
"A9 0E"
"8D C0 BF"
"68"
"69 00"
"48"
"AD 78 04"
"90 2B")): return []
if not find.at(0x69, logical_sectors[6], bytes.fromhex(
"4C B8 B6"
"EA"
"EA"
"EA")): return []
if not find.at(0x8C, logical_sectors[8], bytes.fromhex(
"69 BA")): return []
return [Patch(0, 3, 0x40, bytes.fromhex("F0")),
Patch(0, 3, 0x9C, bytes.fromhex("D0")),
Patch(0, 6, 0x69, bytes.fromhex("20 C3 BC 20 C3 BC")),
Patch(0, 8, 0x8C, bytes.fromhex("A0 B9")),
Patch(0, 4, 0xC0, bytes.fromhex("C0 C1 C2 C3 C4 C5 C6 C7 C8 C9 CA"))]


@ -1,19 +0,0 @@
from passport.patchers import Patch, Patcher
from passport.util import *
class UniversalE7Patcher(Patcher):
"""replace remnants of E7 bitstream with a compatible BYTEstream that fools most E7 protection checks
(invented by qkumba, see PoC||GTFO 0x11 and 4am crack no. 655 Rocky's Boots 4.0 for explanation)
"""
e7sector = b'\x00'*0xA0 + b'\xAC\x00'*0x30
def should_run(self, track_num):
return True
def run(self, logical_sectors, track_num):
patches = []
for sector_num in logical_sectors:
if find.at(0x00, logical_sectors[sector_num], self.e7sector):
patches.append(Patch(track_num, sector_num, 0xA3, b'\x64\xB4\x44\x80\x2C\xDC\x18\xB4\x44\x80\x44\xB4', "e7"))
return patches


@ -1,220 +0,0 @@
from collections import OrderedDict
from passport.util import *
class AddressField:
def __init__(self, volume, track_num, sector_num, checksum):
self.volume = volume
self.track_num = track_num
self.sector_num = sector_num
self.checksum = checksum
self.valid = (volume ^ track_num ^ sector_num ^ checksum) == 0
class Sector:
def __init__(self, address_field, decoded, start_bit_index=None, end_bit_index=None):
self.address_field = address_field
self.decoded = decoded
self.start_bit_index = start_bit_index
self.end_bit_index = end_bit_index
def __getitem__(self, i):
return self.decoded[i]
class RWTS:
kDefaultSectorOrder16 = (0x00, 0x07, 0x0E, 0x06, 0x0D, 0x05, 0x0C, 0x04, 0x0B, 0x03, 0x0A, 0x02, 0x09, 0x01, 0x08, 0x0F)
kDefaultAddressPrologue16 = (0xD5, 0xAA, 0x96)
kDefaultAddressEpilogue16 = (0xDE, 0xAA)
kDefaultDataPrologue16 = (0xD5, 0xAA, 0xAD)
kDefaultDataEpilogue16 = (0xDE, 0xAA)
kDefaultNibbleTranslationTable16 = {
0x96: 0x00, 0x97: 0x01, 0x9a: 0x02, 0x9b: 0x03, 0x9d: 0x04, 0x9e: 0x05, 0x9f: 0x06, 0xa6: 0x07,
0xa7: 0x08, 0xab: 0x09, 0xac: 0x0a, 0xad: 0x0b, 0xae: 0x0c, 0xaf: 0x0d, 0xb2: 0x0e, 0xb3: 0x0f,
0xb4: 0x10, 0xb5: 0x11, 0xb6: 0x12, 0xb7: 0x13, 0xb9: 0x14, 0xba: 0x15, 0xbb: 0x16, 0xbc: 0x17,
0xbd: 0x18, 0xbe: 0x19, 0xbf: 0x1a, 0xcb: 0x1b, 0xcd: 0x1c, 0xce: 0x1d, 0xcf: 0x1e, 0xd3: 0x1f,
0xd6: 0x20, 0xd7: 0x21, 0xd9: 0x22, 0xda: 0x23, 0xdb: 0x24, 0xdc: 0x25, 0xdd: 0x26, 0xde: 0x27,
0xdf: 0x28, 0xe5: 0x29, 0xe6: 0x2a, 0xe7: 0x2b, 0xe9: 0x2c, 0xea: 0x2d, 0xeb: 0x2e, 0xec: 0x2f,
0xed: 0x30, 0xee: 0x31, 0xef: 0x32, 0xf2: 0x33, 0xf3: 0x34, 0xf4: 0x35, 0xf5: 0x36, 0xf6: 0x37,
0xf7: 0x38, 0xf9: 0x39, 0xfa: 0x3a, 0xfb: 0x3b, 0xfc: 0x3c, 0xfd: 0x3d, 0xfe: 0x3e, 0xff: 0x3f,
}
def __init__(self,
g,
sectors_per_track = 16,
address_prologue = kDefaultAddressPrologue16,
address_epilogue = kDefaultAddressEpilogue16,
data_prologue = kDefaultDataPrologue16,
data_epilogue = kDefaultDataEpilogue16,
sector_order = kDefaultSectorOrder16,
nibble_translate_table = kDefaultNibbleTranslationTable16):
self.sectors_per_track = sectors_per_track
self.address_prologue = address_prologue
self.address_epilogue = address_epilogue
self.data_prologue = data_prologue
self.data_epilogue = data_epilogue
self.sector_order = sector_order
self.nibble_translate_table = nibble_translate_table
self.g = g
self.logical_track_num = 0
def seek(self, logical_track_num):
self.logical_track_num = logical_track_num
return float(logical_track_num)
def reorder_to_logical_sectors(self, physical_sectors):
logical = {}
for k, v in physical_sectors.items():
logical[self.sector_order[k]] = v
return logical
def find_address_prologue(self, track):
return track.find(self.address_prologue)
def address_field_at_point(self, track):
volume = decode44(next(track.nibble()), next(track.nibble()))
track_num = decode44(next(track.nibble()), next(track.nibble()))
sector_num = decode44(next(track.nibble()), next(track.nibble()))
checksum = decode44(next(track.nibble()), next(track.nibble()))
return AddressField(volume, track_num, sector_num, checksum)
def verify_nibbles_at_point(self, track, nibbles):
found = []
for i in nibbles:
found.append(next(track.nibble()))
return tuple(found) == tuple(nibbles)
def verify_address_epilogue_at_point(self, track, logical_track_num, physical_sector_num):
return self.verify_nibbles_at_point(track, self.address_epilogue)
def find_data_prologue(self, track, logical_track_num, physical_sector_num):
return track.find(self.data_prologue)
def data_field_at_point(self, track, logical_track_num, physical_sector_num):
disk_nibbles = []
for i in range(343):
disk_nibbles.append(next(track.nibble()))
checksum = 0
secondary = []
decoded = []
for i in range(86):
n = disk_nibbles[i]
if n not in self.nibble_translate_table: return None
b = self.nibble_translate_table[n]
if b >= 0x80: return None
checksum ^= b
secondary.insert(0, checksum)
for i in range(86, 342):
n = disk_nibbles[i]
if n not in self.nibble_translate_table: return None
b = self.nibble_translate_table[n]
if b >= 0x80: return None
checksum ^= b
decoded.append(checksum << 2)
n = disk_nibbles[342] # the final (343rd) nibble is the data field checksum
if n not in self.nibble_translate_table: return None
b = self.nibble_translate_table[n]
if b >= 0x80: return None
checksum ^= b
for i in range(86):
low2 = secondary[85 - i]
decoded[i] += (((low2 & 0b000001) << 1) + ((low2 & 0b000010) >> 1))
decoded[i + 86] += (((low2 & 0b000100) >> 1) + ((low2 & 0b001000) >> 3))
if i < 84:
decoded[i + 172] += (((low2 & 0b010000) >> 3) + ((low2 & 0b100000) >> 5))
return bytearray(decoded)
def verify_data_epilogue_at_point(self, track, logical_track_num, physical_sector_num):
return self.verify_nibbles_at_point(track, self.data_epilogue)
def decode_track(self, track, logical_track_num, burn=0):
sectors = OrderedDict()
if not track: return sectors
if not track.bits: return sectors
starting_revolutions = track.revolutions
verified_sectors = []
while (len(verified_sectors) < self.sectors_per_track) and \
(track.revolutions < starting_revolutions + 2):
# store start index within track (used for .woz conversion)
start_bit_index = track.bit_index
if not self.find_address_prologue(track):
# if we can't even find a single address prologue, just give up
self.g.logger.debug("can't find a single address prologue so LGTM or whatever")
break
# for .woz conversion, only save some of the bits preceding
# the address prologue
if track.bit_index - start_bit_index > 256:
start_bit_index = track.bit_index - 256
# decode address field
address_field = self.address_field_at_point(track)
self.g.logger.debug("found sector %s" % hex(address_field.sector_num)[2:].upper())
if address_field.sector_num in verified_sectors:
# the sector we just found is a sector we've already decoded
# properly, so skip it
self.g.logger.debug("duplicate sector %d, continuing" % address_field.sector_num)
continue
if address_field.sector_num > self.sectors_per_track:
# found a weird sector whose ID is out of range
# TODO: will eventually need to tweak this logic to handle Ultima V and others
self.g.logger.debug("sector ID out of range %d" % address_field.sector_num)
continue
# put a placeholder for this sector in this position in the ordered dict
# so even if this copy doesn't pan out but a later copy does, sectors
# will still be in the original order
sectors[address_field.sector_num] = None
if not self.verify_address_epilogue_at_point(track, logical_track_num, address_field.sector_num):
# verifying the address field epilogue failed, but this is
# not necessarily fatal because there might be another copy
# of this sector later
self.g.logger.debug("verify_address_epilogue_at_point failed, continuing")
continue
if not self.find_data_prologue(track, logical_track_num, address_field.sector_num):
# if we can't find a data field prologue, just give up
self.g.logger.debug("find_data_prologue failed, giving up")
break
# read and decode the data field, and verify the data checksum
decoded = self.data_field_at_point(track, logical_track_num, address_field.sector_num)
if not decoded:
# decoding data field failed, but this is not necessarily fatal
# because there might be another copy of this sector later
self.g.logger.debug("data_field_at_point failed, continuing")
continue
if not self.verify_data_epilogue_at_point(track, logical_track_num, address_field.sector_num):
# verifying the data field epilogue failed, but this is
# not necessarily fatal because there might be another copy
# of this sector later
self.g.logger.debug("verify_data_epilogue_at_point failed")
continue
# store end index within track (used for .woz conversion)
end_bit_index = track.bit_index
# if the caller told us to burn a certain number of sectors before
# saving the good ones, do it now (used for .woz conversion)
if burn:
burn -= 1
continue
# all good, and we want to save this sector, so do it
sectors[address_field.sector_num] = Sector(address_field, decoded, start_bit_index, end_bit_index)
verified_sectors.append(address_field.sector_num)
self.g.logger.debug("saved sector %s" % hex(address_field.sector_num))
# remove placeholders of sectors that we found but couldn't decode properly
# (made slightly more difficult by the fact that we're trying to remove
# elements from an OrderedDict while iterating through the OrderedDict,
# which Python really doesn't want to do)
while None in sectors.values():
for k in sectors:
if not sectors[k]:
del sectors[k]
break
return sectors
def enough(self, logical_track_num, physical_sectors):
return len(physical_sectors) == self.sectors_per_track
from .universal import *
from .dos33 import *
from .sunburst import *
from .border import *
from .d5timing import *
from .infocom import *
from .optimum import *
from .hereditydog import *
from .beca import *
from .laureate import *
from .mecc import *
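
The address field above is read with decode44(), which comes from passport.util and is not shown in this diff. A minimal sketch of what it presumably does, assuming the standard Apple II 4-and-4 scheme (odd bits in the first nibble, even bits in the second); under that scheme the volume ^ track ^ sector ^ checksum test in AddressField holds because the checksum byte is the XOR of the other three:
def decode44(odd_nibble, even_nibble):
    # standard 4-and-4 decoding (sketch; the real helper in passport.util is not shown here)
    return ((odd_nibble << 1) | 0x01) & even_nibble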


@ -1,35 +0,0 @@
from passport.rwts.dos33 import DOS33RWTS
class BECARWTS(DOS33RWTS):
def is_protected_sector(self, logical_track_num, physical_sector_num):
if logical_track_num > 0: return True
return physical_sector_num not in (0x00, 0x0D, 0x0B, 0x09, 0x07, 0x05, 0x03, 0x01, 0x0E, 0x0C)
def reset(self, logical_sectors):
DOS33RWTS.reset(self, logical_sectors)
self.data_prologue = self.data_prologue[:2]
def verify_address_epilogue_at_point(self, track, logical_track_num, physical_sector_num):
if self.is_protected_sector(logical_track_num, physical_sector_num):
return DOS33RWTS.verify_address_epilogue_at_point(self, track, logical_track_num, physical_sector_num)
return True
def find_data_prologue(self, track, logical_track_num, physical_sector_num):
if not DOS33RWTS.find_data_prologue(self, track, logical_track_num, physical_sector_num):
return False
next(track.nibble())
if self.is_protected_sector(logical_track_num, physical_sector_num):
next(track.bit())
next(track.nibble())
next(track.bit())
next(track.bit())
return True
def verify_data_epilogue_at_point(self, track, logical_track_num, physical_sector_num):
if self.is_protected_sector(logical_track_num, physical_sector_num):
next(track.nibble())
if logical_track_num == 0:
next(track.nibble())
next(track.nibble())
return True
return DOS33RWTS.verify_data_epilogue_at_point(self, track, logical_track_num, physical_sector_num)


@ -1,16 +0,0 @@
from passport.rwts.dos33 import DOS33RWTS
class BorderRWTS(DOS33RWTS):
# TODO doesn't work yet, not sure why
def reset(self, logical_sectors):
DOS33RWTS.reset(self, logical_sectors)
self.address_prologue = (logical_sectors[9][0x16],
logical_sectors[9][0x1B],
logical_sectors[9][0x20])
self.address_epilogue = (logical_sectors[9][0x25],
logical_sectors[9][0x2A])
self.data_prologue = (logical_sectors[8][0xFD],
logical_sectors[9][0x02],
logical_sectors[9][0x02])
self.data_epilogue = (logical_sectors[9][0x0C],
logical_sectors[9][0x11])


@ -1,22 +0,0 @@
from passport.rwts.dos33 import DOS33RWTS
class D5TimingBitRWTS(DOS33RWTS):
def reset(self, logical_sectors):
DOS33RWTS.reset(self, logical_sectors)
self.data_prologue = (logical_sectors[2][0xE7],
0xAA,
logical_sectors[2][0xFC])
self.data_epilogue = (logical_sectors[3][0x35],
0xAA)
def find_address_prologue(self, track):
starting_revolutions = track.revolutions
while (track.revolutions < starting_revolutions + 2):
if next(track.nibble()) == 0xD5:
bit = next(track.bit())
if bit == 0: return True
track.rewind(1)
return False
def verify_address_epilogue_at_point(self, track, logical_track_num, physical_sector_num):
return True


@ -1,29 +0,0 @@
from passport.rwts import RWTS
class DOS33RWTS(RWTS):
def __init__(self, logical_sectors, g):
self.g = g
self.reset(logical_sectors)
RWTS.__init__(self,
g,
sectors_per_track=16,
address_prologue=self.address_prologue,
address_epilogue=self.address_epilogue,
data_prologue=self.data_prologue,
data_epilogue=self.data_epilogue,
nibble_translate_table=self.nibble_translate_table)
def reset(self, logical_sectors):
self.address_prologue = (logical_sectors[3][0x55],
logical_sectors[3][0x5F],
logical_sectors[3][0x6A])
self.address_epilogue = (logical_sectors[3][0x91],
logical_sectors[3][0x9B])
self.data_prologue = (logical_sectors[2][0xE7],
logical_sectors[2][0xF1],
logical_sectors[2][0xFC])
self.data_epilogue = (logical_sectors[3][0x35],
logical_sectors[3][0x3F])
self.nibble_translate_table = {}
for nibble in range(0x96, 0x100):
self.nibble_translate_table[nibble] = logical_sectors[4][nibble]
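
DOS33RWTS does not hard-code its marker nibbles; reset() lifts them out of the disk's own RWTS code (the logical_sectors passed in, normally track 0, sectors 2-4), so disks that merely change those bytes still decode without a dedicated subclass. On an unmodified DOS 3.3 RWTS the derived values are just the standard markers, as in this quick sanity sketch (assumes logical_sectors holds track 0 of such a disk and g is the usual globals object):
rwts = DOS33RWTS(logical_sectors, g)
assert rwts.address_prologue == (0xD5, 0xAA, 0x96)
assert rwts.address_epilogue == (0xDE, 0xAA)
assert rwts.data_prologue == (0xD5, 0xAA, 0xAD)
assert rwts.data_epilogue == (0xDE, 0xAA)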


@ -1,21 +0,0 @@
from passport.rwts.dos33 import DOS33RWTS
class HeredityDogRWTS(DOS33RWTS):
def data_field_at_point(self, track, logical_track_num, physical_sector_num):
if (logical_track_num, physical_sector_num) == (0x00, 0x0A):
# This sector is fake, full of too many consecutive 0s,
# designed to read differently every time. We go through
# and clean the stray bits, and be careful not to go past
# the end so we don't include the next address prologue.
start_index = track.bit_index
while (track.bit_index < start_index + (343*8)):
if self.nibble_translate_table.get(next(track.nibble()), 0xFF) == 0xFF:
track.bits[track.bit_index-8:track.bit_index] = 0
self.g.found_and_cleaned_weakbits = True
return bytearray(256)
return DOS33RWTS.data_field_at_point(self, track, logical_track_num, physical_sector_num)
def verify_data_epilogue_at_point(self, track, logical_track_num, physical_sector_num):
if (logical_track_num, physical_sector_num) == (0x00, 0x0A):
return True
return DOS33RWTS.verify_data_epilogue_at_point(self, track, logical_track_num, physical_sector_num)


@ -1,11 +0,0 @@
from passport.rwts.dos33 import DOS33RWTS
class InfocomRWTS(DOS33RWTS):
def reset(self, logical_sectors):
DOS33RWTS.reset(self, logical_sectors)
self.data_prologue = self.data_prologue[:2]
def find_data_prologue(self, track, logical_track_num, physical_sector_num):
if not DOS33RWTS.find_data_prologue(self, track, logical_track_num, physical_sector_num):
return False
return next(track.nibble()) >= 0xAD


@ -1,22 +0,0 @@
from passport.rwts.dos33 import DOS33RWTS
class LaureateRWTS(DOS33RWTS):
# nibble table is in T00,S06
# address prologue is T00,S05 A$55,A$5F,A$6A
# address epilogue is T00,S05 A$91,A$9B
# data prologue is T00,S04 A$E7,A$F1,A$FC
# data epilogue is T00,S05 A$35,A$3F
def reset(self, logical_sectors):
self.address_prologue = (logical_sectors[5][0x55],
logical_sectors[5][0x5F],
logical_sectors[5][0x6A])
self.address_epilogue = (logical_sectors[5][0x91],
logical_sectors[5][0x9B])
self.data_prologue = (logical_sectors[4][0xE7],
logical_sectors[4][0xF1],
logical_sectors[4][0xFC])
self.data_epilogue = (logical_sectors[5][0x35],
logical_sectors[5][0x3F])
self.nibble_translate_table = {}
for nibble in range(0x96, 0x100):
self.nibble_translate_table[nibble] = logical_sectors[6][nibble]


@ -1,40 +0,0 @@
from passport.rwts.dos33 import DOS33RWTS
class MECCRWTS(DOS33RWTS):
# MECC fastloaders
def __init__(self, mecc_variant, logical_sectors, g):
g.mecc_variant = mecc_variant
DOS33RWTS.__init__(self, logical_sectors, g)
def reset(self, logical_sectors):
self.nibble_translate_table = self.kDefaultNibbleTranslationTable16
self.address_epilogue = (0xDE, 0xAA)
self.data_epilogue = (0xDE, 0xAA)
if self.g.mecc_variant == 1:
self.address_prologue = (logical_sectors[0x0B][0x08],
logical_sectors[0x0B][0x12],
logical_sectors[0x0B][0x1D])
self.data_prologue = (logical_sectors[0x0B][0x8F],
logical_sectors[0x0B][0x99],
logical_sectors[0x0B][0xA3])
elif self.g.mecc_variant == 2:
self.address_prologue = (logical_sectors[7][0x83],
logical_sectors[7][0x8D],
logical_sectors[7][0x98])
self.data_prologue = (logical_sectors[7][0x15],
logical_sectors[7][0x1F],
logical_sectors[7][0x2A])
elif self.g.mecc_variant == 3:
self.address_prologue = (logical_sectors[0x0A][0xE8],
logical_sectors[0x0A][0xF2],
logical_sectors[0x0A][0xFD])
self.data_prologue = (logical_sectors[0x0B][0x6F],
logical_sectors[0x0B][0x79],
logical_sectors[0x0B][0x83])
elif self.g.mecc_variant == 4:
self.address_prologue = (logical_sectors[8][0x83],
logical_sectors[8][0x8D],
logical_sectors[8][0x98])
self.data_prologue = (logical_sectors[8][0x15],
logical_sectors[8][0x1F],
logical_sectors[8][0x2A])


@ -1,16 +0,0 @@
from passport.rwts.dos33 import DOS33RWTS
class OptimumResourceRWTS(DOS33RWTS):
def data_field_at_point(self, track, logical_track_num, physical_sector_num):
if (logical_track_num, physical_sector_num) == (0x01, 0x0F):
# TODO actually decode these
disk_nibbles = []
for i in range(343):
disk_nibbles.append(next(track.nibble()))
return bytearray(256) # all zeroes for now
return DOS33RWTS.data_field_at_point(self, track, logical_track_num, physical_sector_num)
def verify_data_epilogue_at_point(self, track, logical_track_num, physical_sector_num):
if (logical_track_num, physical_sector_num) == (0x01, 0x0F):
return True
return DOS33RWTS.verify_data_epilogue_at_point(self, track, logical_track_num, physical_sector_num)


@ -1,32 +0,0 @@
from passport.rwts.dos33 import DOS33RWTS
class SunburstRWTS(DOS33RWTS):
def reset(self, logical_sectors):
DOS33RWTS.reset(self, logical_sectors)
self.address_epilogue = (logical_sectors[3][0x91],)
self.data_epilogue = (logical_sectors[3][0x35],)
self.address_prologue_third_nibble_by_track = logical_sectors[4][0x29:]
self.data_prologue_third_nibble_by_track = logical_sectors[4][0x34:]
def seek(self, logical_track_num):
self.address_prologue = (self.address_prologue[0],
self.address_prologue[1],
self.address_prologue_third_nibble_by_track[logical_track_num])
self.data_prologue = (self.data_prologue[0],
self.data_prologue[1],
self.data_prologue_third_nibble_by_track[logical_track_num])
DOS33RWTS.seek(self, logical_track_num)
if logical_track_num == 0x11:
self.sector_order = (0x00, 0x07, 0x08, 0x06, 0x0D, 0x05, 0x0C, 0x04, 0x0B, 0x03, 0x0A, 0x02, 0x09, 0x01, 0x08, 0x0F)
else:
self.sector_order = self.kDefaultSectorOrder16
if logical_track_num >= 0x11:
return logical_track_num + 0.5
else:
return float(logical_track_num)
def enough(self, logical_track_num, physical_sectors):
if logical_track_num == 0x11:
return len(physical_sectors) >= 14
return DOS33RWTS.enough(self, logical_track_num, physical_sectors)


@ -1,63 +0,0 @@
from passport.rwts import RWTS
class UniversalRWTS(RWTS):
acceptable_address_prologues = ((0xD4,0xAA,0x96), (0xD5,0xAA,0x96))
def __init__(self, g):
RWTS.__init__(self, g, address_epilogue=[], data_epilogue=[])
def find_address_prologue(self, track):
starting_revolutions = track.revolutions
seen = [0,0,0]
while (track.revolutions < starting_revolutions + 2):
del seen[0]
seen.append(next(track.nibble()))
if tuple(seen) in self.acceptable_address_prologues: return True
return False
def verify_address_epilogue_at_point(self, track, logical_track_num, physical_sector_num):
# return True
if not self.address_epilogue:
self.address_epilogue = [next(track.nibble())]
result = True
else:
result = RWTS.verify_address_epilogue_at_point(self, track, logical_track_num, physical_sector_num)
next(track.nibble())
next(track.nibble())
return result
def verify_data_epilogue_at_point(self, track, logical_track_num, physical_sector_num):
if not self.data_epilogue:
self.data_epilogue = [next(track.nibble())]
result = True
else:
result = RWTS.verify_data_epilogue_at_point(self, track, logical_track_num, physical_sector_num)
next(track.nibble())
next(track.nibble())
return result
class UniversalRWTSIgnoreEpilogues(UniversalRWTS):
def verify_address_epilogue_at_point(self, track, logical_track_num, physical_sector_num):
return True
def verify_data_epilogue_at_point(self, track, logical_track_num, physical_sector_num):
return True
class Track00RWTS(UniversalRWTSIgnoreEpilogues):
def data_field_at_point(self, track, logical_track_num, physical_sector_num):
start_index = track.bit_index
start_revolutions = track.revolutions
decoded = UniversalRWTS.data_field_at_point(self, track, logical_track_num, physical_sector_num)
if not decoded:
# If the sector didn't decode properly, rewind to the
# beginning of the data field before returning to the
# caller. This is for disks with a fake T00,S0A that
# is full of consecutive 0s, where if we consume the bitstream
# as nibbles, we'll end up consuming the next address field
# and it will seem like that sector doesn't exist. And that
# is generally logical sector 2, which is important not to
# miss at this stage because its absence triggers a different
# code path and everything falls apart.
track.bit_index = start_index
track.revolutions = start_revolutions
return decoded


@ -1,142 +0,0 @@
__date__ = "2019-03-03"
STRINGS = {
"header": "Passport.py by 4am (" + __date__ + ")\n", # max 32 characters
"reading": "Reading from {filename}\n",
"diskrwts": "Using disk's own RWTS\n",
"bb00": "T00,S05 Found $BB00 protection check\n"
"T00,S0A might be unreadable\n",
"sunburst": "T00,S04 Found Sunburst disk\n"
"T11,S0F might be unreadable\n",
"optimum": "T00,S00 Found Optimum Resource disk\n"
"T01,S0F might be unreadable\n",
"builtin": "Using built-in RWTS\n",
"switch": "T{track},S{sector} Switching to built-in RWTS\n",
"writing": "Writing to {filename}\n",
"unformat": "T{track} is unformatted\n",
"f7": "T{track} Found $F7F6EFEEAB protection track\n",
"sync": "T{track} Found nibble count protection track\n",
"optbad": "T{track},S{sector} is unreadable (ignoring)\n",
"passver": "Verification complete. The disk is good.\n",
"passdemuf": "Demuffin complete.\n",
"passcrack": "Crack complete.\n",
"passcrack0": "\n"
"The disk was copied successfully, but\n"
"Passport did not apply any patches.\n\n"
"Possible reasons:\n"
"- The source disk is not copy protected.\n"
"- The target disk works without patches.\n"
"- The disk uses an unknown protection,\n"
" and Passport can not help any further.\n",
"fail": "\n"
"T{track},S{sector} Fatal read error\n\n",
"fatal0000": "\n"
"Possible reasons:\n"
"- The source file does not exist.\n"
"- This is not an Apple ][ disk.\n"
"- The disk is 13-sector only.\n"
"- The disk is unformatted.\n\n",
"fatal220f": "\n"
"Passport does not work on this disk.\n\n"
"Possible reasons:\n"
"- This is not a 13- or 16-sector disk.\n"
"- The disk modifies its RWTS in ways\n"
" that Passport is not able to detect.\n\n",
"modify": "T{track},S{sector},${offset}: {old_value} -> {new_value}\n",
"dos33boot0": "T00,S00 Found DOS 3.3 bootloader\n",
"dos32boot0": "T00,S00 Found DOS 3.2 bootloader\n",
"prodosboot0": "T00,S00 Found ProDOS bootloader\n",
"pascalboot0": "T00,S00 Found Pascal bootloader\n",
"mecc": "T00,S00 Found MECC bootloader\n",
"sierra": "T{track},S{sector} Found Sierra protection check\n",
"a6bc95": "T{track},S{sector} Found A6BC95 protection check\n",
"jmpbcf0": "T00,S03 RWTS requires a timing bit after\n"
"the first data epilogue by jumping to\n"
"$BCF0.\n",
"rol1e": "T00,S03 RWTS accumulates timing bits in\n"
"$1E and checks its value later.\n",
"runhello": "T{track},S{sector} Startup program executes a\n"
"protection check before running the real\n"
"startup program.\n",
"e7": "T{track},S{sector} Found E7 bitstream\n",
"jmpb4bb": "T{track},S{sector} Disk calls a protection check at\n"
"$B4BB before initializing DOS.\n",
"jmpb400": "T{track},S{sector} Disk calls a protection check at\n"
"$B400 before initializing DOS.\n",
"jmpbeca": "T00,S02 RWTS requires extra nibbles and\n"
"timing bits after the data prologue by\n"
"jumping to $BECA.\n",
"jsrbb03": "T00,S05 Found a self-decrypting\n"
"protection check at $BB03.\n",
"thunder": "T00,S03 RWTS counts timing bits and\n"
"checks them later.\n",
"jmpae8e": "T00,S0D Disk calls a protection check at\n"
"$AE8E after initializing DOS.\n",
"diskvol": "T00,S08 RWTS requires a non-standard\n"
"disk volume number.\n",
"d5d5f7": "T{track},S{sector} Found D5D5F7 protection check\n",
"construct": "T01,S0F Reconstructing missing data\n",
"datasoftb0": "T00,S00 Found Datasoft bootloader\n",
"datasoft": "T{track},S{sector} Found Datasoft protection check\n",
"lsr6a": "T{track},S{sector} RWTS accepts $D4 or $D5 for the\n"
"first address prologue nibble.\n",
"bcs08": "T{track},S{sector} RWTS accepts $DE or a timing bit\n"
"for the first address epilogue nibble.\n",
"jmpb660": "T00,S02 RWTS requires timing bits after\n"
"the data prologue by jumping to $B660.\n",
"protdos": "T00,S01 Found encrypted RWTS, key=${key}\n",
"protdosw": "T00 Decrypting RWTS before writing\n",
"protserial": "T{track},S{sector} Erasing serial number {serial}\n",
"fbff": "T{track},S{sector} Found FBFF protection check\n",
"encoded44": "\n"
"T00,S00 Fatal error\n\n"
"Passport does not work on this disk,\n"
"because it uses a 4-and-4 encoding.\n",
"encoded53": "\n"
"T00,S00 Fatal error\n\n"
"Passport does not work on this disk,\n"
"because it uses a 5-and-3 encoding.\n",
"specdel": "T00,S00 Found DOS 3.3P bootloader\n",
"bytrack": "T{track},S{sector} RWTS changes based on track\n",
"a5count": "T{track},S{sector} Found A5 nibble count\n",
"restart": "Restarting scan\n",
"corrupter": "T13,S0E Protection check intentionally\n"
"destroys unauthorized copies\n",
"eaboot0": "T00 Found Electronic Arts bootloader\n",
"eatrk6": "T06 Found EA protection track\n",
"poke": "T{track},S{sector} BASIC program POKEs protection\n"
"check into memory and CALLs it.\n",
"bootcounter": "T{track},S{sector} Original disk destroys itself\n"
"after a limited number of boots.\n",
"milliken": "T00,S0A Found Milliken protection check\n"
"T02,S05 might be unreadable\n",
"jsr8b3": "T00,S00 Found JSR $08B3 bootloader\n",
"daviddos": "T00,S00 Found David-DOS bootloader\n",
"quickdos": "T00,S00 Found Quick-DOS bootloader\n",
"diversidos": "T00,S00 Found Diversi-DOS bootloader\n",
"prontodos": "T00,S00 Found Pronto-DOS bootloader\n",
"jmpb412": "T02,S00 Disk calls a protection check\n"
"at $B412 before initializing DOS.\n",
"laureate": "T00,S00 Found Laureate bootloader\n",
"bbf9": "T{track},S{sector} Found BBF9 protection check\n",
"micrograms": "T00,S00 Found Micrograms bootloader\n",
"cmpbne0": "T{track},S{sector} RWTS accepts any value for the\n"
"first address epilogue nibble.\n",
"d5timing": "T{track},S{sector} RWTS accepts $D5 plus a timing\n"
"bit as the entire address prologue.\n",
"advint": "T{track},S{sector} Found Adventure International\n"
"protection check\n",
"bootwrite": "T00,S00 Writing Standard Delivery\n"
"bootloader\n",
"rwtswrite": "T00,S02 Writing built-in RWTS\n",
"rdos": "T00,S00 Found RDOS bootloader\n",
"sra": "T{track},S{sector} Found SRA protection check\n",
"muse": "T00,S08 RWTS doubles every sector ID\n",
"origin": "T{track},S{sector} RWTS alters the sector ID if the\n"
"address epilogue contains a timing bit.\n",
"volumename": "T{track},S{sector} Volume name is ", # no \n
"dinkeydos": "T00,S0B Found Dinkey-DOS\n",
"trillium": "T{track},S{sector} Found Trillium protection check\n",
"tamper": "T{track},S{sector} Found anti-tamper check\n",
"microfun": "T{track},S{sector} Found Micro Fun protection check\n",
}
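
The {track}/{sector}/{filename} placeholders suggest these templates are filled in with str.format() by the logging code. A minimal sketch under that assumption:
# sketch only -- assumes the logger fills templates with str.format()
print(STRINGS["header"], end="")
print(STRINGS["switch"].format(track="13", sector="0E"), end="")
# -> "T13,S0E Switching to built-in RWTS"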

14
pyproject.toml Normal file

@ -0,0 +1,14 @@
# SPDX-FileCopyrightText: 2022 Jeff Epler for Adafruit Industries
#
# SPDX-License-Identifier: MIT
[build-system]
requires = [
"setuptools>=45",
"setuptools_scm[toml]>=6.0",
"wheel",
]
build-backend = "setuptools.build_meta"
[tool.setuptools_scm]
write_to = "a2woz/__version__.py"

3
requirements.txt Normal file

@ -0,0 +1,3 @@
# SPDX-FileCopyrightText: 2022 Jeff Epler for Adafruit Industries
#
# SPDX-License-Identifier: MIT

33
setup.cfg Normal file

@ -0,0 +1,33 @@
# SPDX-FileCopyrightText: 2022 Jeff Epler for Adafruit Industries
#
# SPDX-License-Identifier: MIT
[metadata]
name = a2woz
author = Jeff Epler for Adafruit Industries
author_email = jeff@adafruit.com
description = Convert Apple II flux disk images (.a2r) to .woz
long_description = file: README.md
long_description_content_type = text/markdown
url = https://github.com/adafruit/a2woz
classifiers =
Programming Language :: Python :: 3
Programming Language :: Python :: 3.9
Programming Language :: Python :: 3.10
Programming Language :: Python :: Implementation :: CPython
License :: OSI Approved :: MIT License
Operating System :: OS Independent
[options]
package_dir =
=.
packages =
a2woz
a2woz.loggers
a2woz.util
python_requires = >=3.9
install_requires =
[options.entry_points]
console_scripts =
a2woz = a2woz.__main__:main
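
The console_scripts entry means pip generates an a2woz command that calls main() in a2woz/__main__.py. That module is not part of this diff; a purely hypothetical skeleton of the shape it needs to have:
# hypothetical skeleton only -- the real a2woz/__main__.py is not shown in this diff
import argparse

def main():
    parser = argparse.ArgumentParser(prog="a2woz")
    parser.add_argument("files", nargs="+", help="disk image(s) to convert")
    args = parser.parse_args()
    for filename in args.files:
        print("would convert", filename)

if __name__ == "__main__":
    main()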