Hi Curtis, et al.

I've started going through the IJ-Commands-All Google doc, testing each command by hand. The plan was/is, for each command:

- Manually test in IJ1 and IJ2 simultaneously, making sure to try it in different contexts.
- Record the actions as a macro.
- Update the IJ-Commands-All doc (did anything break in IJ2?).
- Add the macro text to the test script (roughly like the snippet below).
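For illustration, a test-script entry would look something like the following sketch in the ImageJ macro language; the command and parameters shown (Gaussian Blur, sigma=2) are placeholders, not an actual entry from the script:

    // Hypothetical test-script entry for one command (ImageJ macro language).
    // The recorded line is pasted in, bracketed by setup and a crude sanity check.
    newImage("test", "8-bit ramp", 256, 256, 1);   // create a known input image
    run("Gaussian Blur...", "sigma=2");            // the recorded command under test
    getStatistics(area, mean);                     // did the command leave a sane result?
    if (mean == 0)
        print("FAIL: Gaussian Blur produced an empty image");
    else
        print("OK: Gaussian Blur");
    close();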
However, I'm already running into several bugs that aren't necessarily the fault of the command itself, and some that simply can't be demonstrated via a macro. In these cases I've been writing down a description of each bug and how to reproduce it, but a text document on my local machine is hardly the place for this list to reside. Still, I'm hesitant to just start filing lots of bugs in the imagejdev Trac. Does anyone have suggestions for how to approach this methodically, so the issues get seen by someone who can determine their severity and fix/ignore/dismiss them as they see fit?
-Adam