Name It!

The difference between data and actuallyUsefulReference.

This is the 13th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

You’ll often hear that it is important to add comments to your code. I don’t think that is quite true.

I see comments as a means to an end. The goal is readable code, and comments are just one way to try to accomplish it.

Rule #13: Put effort into naming things.

One of my recurring messages is: have some coding convention and be consistent. A convention for naming things is part of the coding convention.

For whatever they’re worth, I’ll describe some of my naming conventions, which reflect my personal quirks and preferences. Don’t take what follows as gospel.

Comments

The problem with comments is that they are not coupled to the code.

Their only relation to the code is their position within the code. That’s it.

Initially, a comment will clarify the code.

Often the code then changes, and the interspersed comment gets overlooked and remains unchanged.

Finally, what starts out as a helpful comment inadvertently becomes an unhelpful lie.

At best, comments are a liability: the software developer is liable to keep them updated as the code evolves.

At worst, comments are booby traps that cause harm at undesirable moments.

Better Than Comments

I put a lot of effort into judiciously naming and renaming the various elements of my code: variables, functions, constants…

I find it useful to re-read and tidy up my code at most a few days after writing it. By then, I still remember enough about how it all works, but any details that are not obvious will become apparent to me as I try to make sense of what I’ve written.

A helpful use of comments is to use structured comments to mix API documentation into the code. I can run a documentation-generation tool to extract that info into readable documentation. That works great.

In rare circumstances, I’ll use a comment to point out a ‘gotcha’: an area where the code looks like other code, but is subtly different. The comment alerts the reader to a detail they might not expect.

Apart from structured documentation and gotcha comments, I will only add a comment if I really, really cannot make the code self-explanatory by reshuffling statements, renaming elements, adding white space, dividing into smaller chunks, grouping larger actions into bite-sized thoughts…

Naming

Things I consider when naming things: I’ll rarely, if ever, use variable names like i, j, idx, count… Those names convey very little meaning.
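For instance, here is a minimal sketch (with made-up data) of the kind of loop I mean, written with descriptive names instead of i and count:

```javascript
// Hypothetical example: counting words across paragraphs.
// paragraphIndex and wordCount tell the reader what is being
// iterated and what is being counted; i and count would not.
var paragraphs = ["one two", "three"];
var wordCount = 0;

for (var paragraphIndex = 0; paragraphIndex < paragraphs.length; paragraphIndex++) {
    var wordList = paragraphs[paragraphIndex].split(" ");
    wordCount += wordList.length;
}
```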

I don’t have a rigid approach to how the names are spelled out. UPPER_CASE, lower_case, camelCase, first letter upper- or lowercase… that all depends on the project and language at hand. There might be existing ‘best practices’, which I’ll try to follow.

Like As Like

What I am rigid about is consistency. I want similar elements to be named similarly, to draw attention to the fact that they’re similar. Below, the yes lines show variable names I might have used in my code; the no lines show inconsistently named variables I’ll try to avoid using within the same project.

It’s not about the names themselves, but rather about the consistency in how the names are chosen.

yes: authorPtr vs. bookPtr
no: authorPtr and pBook
yes: numParagraphs and numCharacters
no: numParagraphs and charCount
yes: paraCount and charCount
no: numPara and charCount
yes: STATE_IDLE and STATE_EXPECTING_LETTER
no: IdleState and STATE_EXPECTING_LETTER

Depending on the context (language, complexity of the project), I might also cram some ‘meta-info’ into the variable name.

Parameters

Most of the time, I prefer call-by-value behavior and use return to hand back results, but for some projects I use call-by-reference and a more complicated mechanism for passing data in and out of methods.

In those cases, I might use a name prefix (in_…, io_…, out_…) on function parameters to differentiate parameters that are ‘by value’ vs parameters that are ‘by reference’.

Pseudocode example:

function doSomething(in_someValue, io_someReference, out_someResult) {
...
}

in_... means: data being passed in.
out_... means: data being returned.
io_... means: data being passed in, then modified in the method, so different data is handed back.
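A minimal JavaScript sketch of this convention (the function and variable names are made up): JavaScript passes objects ‘by sharing’, so a mutable object can play the role of an io_ or out_ parameter.

```javascript
// Hypothetical example of the in_/io_/out_ prefix convention.
function appendGreeting(in_name, io_lineList, out_result) {
    var line = "Hello, " + in_name;            // in_: read only, never modified
    io_lineList.push(line);                    // io_: existing data, modified here
    out_result.lineCount = io_lineList.length; // out_: filled in for the caller
}

var lineList = [];
var result = {};
appendGreeting("World", lineList, result);
```

The prefixes make the data flow visible at the call site, without having to read the function body.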

I will express things like ‘const-ness’ through the features of the language at hand.

Meta-Information In Names

I don’t use Hungarian Notation, but I do use some similar ideas, where the name of an entity reflects some additional meta-information about it. That meta-information helps a human reader understand some of the less-obvious relations in the code.

An example: I have variables that store JSON strings, and corresponding variables that store a deserialized object. In my code I will use names like thingJSON and thing which indicate to me that thingJSON is a string which can be passed into some JSON parser, and thing is the deserialized result.
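A minimal sketch of that convention, with a made-up author example:

```javascript
// The ...JSON suffix marks a plain string, ready for a JSON parser;
// the unsuffixed name holds the deserialized object.
var authorJSON = '{"name":"Ada","bookCount":2}'; // a string
var author = JSON.parse(authorJSON);             // the deserialized object

var bookCount = author.bookCount;
```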

For pointers to things I might use either a p... prefix or a ...Ptr suffix – mostly depending on whatever convention is already in force throughout the project. I have no real preference for one over the other. The main thing is: it needs to be consistent throughout the project.

I’ve spent a lot of time writing C and C++ code, and one habit I formed is to use ALL_UPPERCASE for constants and macros. It’s a habit, in my opinion neither better nor worse than using prefixes like c... or k... for global constants.

Booleans

In the same vein, I will try to name variables and functions that handle boolean values such that their name reflects their boolean nature. isValid, hasDelimiter, isCachedValueAvailable,…
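For example, a small sketch (with hypothetical names) where the boolean nature is visible at the call site:

```javascript
// Boolean-returning helpers named so call sites read like plain English.
function isValidSize(fileSize, maxSize) {
    return fileSize > 0 && fileSize < maxSize;
}

function hasDelimiter(text, delimiter) {
    return text.indexOf(delimiter) >= 0;
}

// The is.../has... names tell the reader this is a true/false decision.
var isUploadAllowed = isValidSize(1024, 1000000) && hasDelimiter("a,b,c", ",");
```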

Collections

With collections, I’ll try to reflect the type and structure of the collection. Instead of a bland variable name like users which does not hint at how those users are stored, I’ll prefer to use variable names like userList, userMapByName, userSet… to hint at the underlying data structure.
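A small sketch with made-up data to illustrate: the suffix hints at the data structure, so the reader knows how to iterate or look things up without checking the declaration.

```javascript
var userList = ["ann", "bob"];   // ...List: ordered, may contain duplicates
var userMapByName = {            // ...MapByName: keyed lookup by name
    ann: { age: 30 },
    bob: { age: 40 }
};

var bobAge = userMapByName["bob"].age;
```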

Namespacing

I also like using some namespacing mechanism, especially for stuff that is globally shared.

Rather than make everything global, I prefer to use namespacing techniques. Some languages (e.g. C++, Java) have namespacing features built-in and then I’ll use those.

Other languages don’t have namespacing, in which case I’ll use some approximation – e.g. in JavaScript I’ll use global objects or functions and stash stuff in them. In Xojo I’ll use modules or classes to stash global objects.

This increases the odds that the code is re-usable and reduces the risk of a clash with other code.

Some sample code from a project generated by CEPSparker:

if (! SPRK.C) {
    SPRK.C = {}; // stash constants here   
}

...

SPRK.C.APP_CODE_AFTER_EFFECTS                   = "AEFT";
SPRK.C.APP_CODE_BRIDGE                          = "KBRG";
SPRK.C.APP_CODE_DREAMWEAVER                     = "DRWV";
SPRK.C.APP_CODE_FLASH_PRO                       = "FLPR";
SPRK.C.APP_CODE_ILLUSTRATOR                     = "ILST";
SPRK.C.APP_CODE_INCOPY                          = "AICY";
SPRK.C.APP_CODE_INDESIGN                        = "IDSN";
SPRK.C.APP_CODE_PHOTOSHOP                       = "PHXS";
SPRK.C.APP_CODE_PHOTOSHOP_OLD                   = "PHSP";
SPRK.C.APP_CODE_PRELUDE                         = "PRLD";
SPRK.C.APP_CODE_PREMIERE_PRO                    = "PPRO";

Aligning and Sorting

Some people prefer their code editor to do the formatting for them, but I like to format by hand. I like vertically aligned code and monospaced fonts. I also like to have similar things alphabetically sorted.

Modern code editors like Sublime Text or VSCode make it very easy to keep things sorted and aligned.

This helps me visually spot discrepancies. Imagine I added a new constant and made a consistency mistake, like:

SPRK.C.AP_CODE_EXPRESS                          = "EXPRESS";

(Note AP_ instead of APP_.) It would visually stand out like a sore thumb.

Intermediate Results

A powerful technique is to store intermediate results into temporary variables with meaningful names.

Rather than write out long, complicated expressions, I’ll evaluate and store the subexpressions and then combine them in a final expression.

This has two advantages:
– it can explain the code better, without need for a comment
– it makes the code easier to debug

During a debug session, I can inspect the intermediate result, rather than being forced into an all-or-nothing situation.

Sample snippet in JavaScript: instead of

        var padding = new Array(len - retVal.length + 1).join(padChar);
        retVal = padding + retVal;

I’ll write:

        var padLength = len - retVal.length;

        var padding = new Array(padLength + 1).join(padChar);
        retVal = padding + retVal;

Regular Expressions

Naming regular expressions can also be helpful.

Regular expressions are notoriously ‘write once, read never’ constructs.

Once you’ve figured it out, you never want to dissect it again. Using named constants will help make the code readable. JavaScript example:

const REGEXP_TRIM                              = /^\s*(\S?.*?)\s*$/;

Depending on the context, there can also be a performance benefit: a regular expression needs to be compiled into an internal representation, which can be expensive. Using a named constant rather than repeating a literal can be faster, because the regular expression only needs to be compiled once.
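For illustration, here is a usage sketch of that named constant (the trim helper wrapper is my own made-up example): the capture group grabs the text between leading and trailing whitespace.

```javascript
const REGEXP_TRIM = /^\s*(\S?.*?)\s*$/;

// Hypothetical helper: replace the whole match with just the captured group.
function trim(s) {
    return s.replace(REGEXP_TRIM, "$1");
}

var trimmed = trim("   some text   ");
```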

Next

Putting some effort in naming coding elements in a consistent and helpful manner pays off.

If you’re interested in automating part of a Creative Cloud-based workflow, please reach out to [email protected].

We provide training, co-development, and mentoring for developers working around Adobe’s Creative Cloud and InDesign Server.

We can also run workshops – have a look at Workshop: Mastering Automation for Real-World Adobe Workflows.

If you find this post to be helpful, make sure to give me a positive reaction on LinkedIn! I don’t use any other social media platforms. My LinkedIn account is here:

https://www.linkedin.com/in/kristiaan

Writing Efficient Code Isn’t Always Technical

When it comes to writing efficient code, the choice of algorithm can have tremendous consequences.

This is the 12th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

Rule #12: The biggest efficiency gains happen before the first line of code.

Know Your Tools

Suppose I have a large ordered table of names and I need to find a particular name. I could perform either a sequential search of the table, or I could use a binary search approach.

In many situations the binary search will be much, much faster than a sequential search.

The important thing is: when faced with a certain task, it helps to know of some basic patterns and algorithms that might be more efficient for that task.
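To illustrate, here is a minimal binary search sketch in JavaScript (the data is made up). Each step halves the search range, so a million-entry table needs about 20 comparisons instead of up to a million.

```javascript
// Binary search over a sorted array of names; returns the index, or -1.
function binarySearch(sortedNames, wanted) {
    var lo = 0;
    var hi = sortedNames.length - 1;
    while (lo <= hi) {
        var mid = Math.floor((lo + hi) / 2);
        if (sortedNames[mid] === wanted) {
            return mid;
        }
        if (sortedNames[mid] < wanted) {
            lo = mid + 1; // wanted is in the upper half
        } else {
            hi = mid - 1; // wanted is in the lower half
        }
    }
    return -1; // not found
}

var names = ["ann", "bob", "cleo", "dave", "eve"];
var foundIndex = binarySearch(names, "cleo");
```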

This is where AI can help in a big way. I’ll fire up Claude, ChatGPT, DeepSeek or whatever, have a conversation, and ask it to teach me about promising algorithms I am not familiar with, turning it into a learning experience!

When You Only Have A Hammer…

Before applying any algorithm or pattern I try to take a step back and take the broader view.

For example, if the table is small, a binary search is often overkill, and can end up being substantially slower than a straight sequential search.

If the table isn’t already ordered, and sorting would be required just to use binary search, using binary search is often not worth it.

I try to avoid using a cannon to kill a mosquito.

If I know a table will only ever have, say, at most 5 elements in it, I won’t refactor that code with binary search or a B-tree.

Instead, I’ll translate my assumptions into a simple linear search and some sanity checks with logging, so the code will verify the table size and tell me should the table turn out to be larger than expected.
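A sketch of what I mean, with made-up names and a made-up size limit: a plain linear search, plus a check that logs if the smallness assumption breaks.

```javascript
// Hypothetical 'assume small, but verify' search.
var MAX_EXPECTED_TABLE_SIZE = 5;

function findName(nameTable, wanted) {
    if (nameTable.length > MAX_EXPECTED_TABLE_SIZE) {
        // The table grew beyond what this simple search was designed for;
        // log it so the assumption failure does not go unnoticed.
        console.log("findName: table has " + nameTable.length +
            " entries; expected at most " + MAX_EXPECTED_TABLE_SIZE);
    }
    for (var entryIndex = 0; entryIndex < nameTable.length; entryIndex++) {
        if (nameTable[entryIndex] === wanted) {
            return entryIndex;
        }
    }
    return -1;
}

var whereIsBob = findName(["ann", "bob"], "bob");
```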

The important thing is to take a step back and do a bit of thinking and research.

Is It Worth Fretting Over?

It is important to look at the wider picture.

Sometimes a brute-force search is fine. Sometimes you need to rewrite your data model. The hard part isn’t coding efficiently. It’s knowing when efficiency matters.

Throwaway Code

I often write throwaway code, for example when I am doing a database conversion or some data massaging.

I regularly build programs or scripts that are only run a few times, then discarded.

I’ll only dig in when the potential time savings start surpassing the effort needed to implement them.

For such code, the difference between 0.1 sec and 120 sec of execution time is not relevant, especially when compared to the extra effort of implementing a more efficient algorithm.

On the other hand, if I have a script that needs 72 hours to execute and with a few hours work I can bring that down to 15 mins, that’s worth it.

Existing Third Party Code

There might be existing libraries/modules/source code examples that provide the functionality I need.

Whether I rely on existing third-party code depends on who made it, how much I need it, what it does, and how deeply nested and far-reaching the dependencies are.

Pulling in external modules often equates to quick relief, followed by long suffering.

Think about it: once I pull in external modules, I allow someone who is not me to have some control over my software.

I’ll try very hard to rely on as few external dependencies as possible. I will try to avoid using package managers in the Node.js eco-system.

The first reason: I want to avoid ‘update hell’. I hate it when I pick up a dormant project a few months later and am forced to spend a day or two catching up, updating my code for all the updates and deprecations in the external modules I was using.

Second reason: safety and security. The world of open source has changed for the worse over the last few decades, and I don’t have much trust in such eco-systems.

Many of these ecosystems feel like houses of cards. One compromised package deep in the tree, and a Trojan horse walks right in.

A few examples to see what I am on about:

https://cloud.google.com/blog/topics/threat-intelligence/supply-chain-node-js/
https://blog.npmjs.org/post/180565383195/details-about-the-event-stream-incident
https://en.wikipedia.org/wiki/XZ_Utils_backdoor

Existing Private Code

For a wide range of functionality, I prefer to roll my own.

Over the decades, I’ve built a broad library of stable, reusable code that can easily be re-purposed within new projects.

I know my code is not bug-free, but it’s mature and stable, and I retain 100% control.

When I need some functionality that I have not covered yet, and I feel confident enough to write my own version, I’ll do just that and extend my private library.

Trusted Third Party Code

The other side of the coin: I also try to avoid the ‘not invented here’ syndrome, and try not to bite off more than I can chew.

For example, I would not dream of writing my own crypto or compression modules.

There are just too many pitfalls and ways to mess things up.

Things like OpenSSL, libtiff, libjpeg, zlib, boost, MariaDB, PostgreSQL, SQLite… have less of the ‘wild west’ mentality than the Node.js eco-system.

They’re well maintained, mature and stable, and security issues tend to be resolved promptly. Breaking API changes are very rare.

I try to strike a balance between being too cautious and being too trusting by relying on just a few external libraries.

The way I currently approach external dependencies: I need to be able to tell you, off the top of my head, exactly what the external dependencies in my projects are.

If I cannot do that I know there are too many.

If I can do that, I know there will be only a few, and they’re easy to track.

With a Node.js package manager that would rarely be the case, as the dependency trees reach way, way too deep and wide.

Next

In the end, efficiency isn’t just about shaving milliseconds. It’s about making thoughtful decisions that pay off across the full life of the project.


Give Me Space!

The most important character in my code is whitespace (space, tab, newline…).

This is the 11th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

Rule #11: Whitespace is structure, not decoration.

Adding more whitespace to my code does not slow down my code’s execution speed in any significant way.

But it can hugely speed up the comprehension of a human reader who needs to digest and understand my code.

Example

I asked Claude 3.7 to dig up some real-life examples; I picked this one to show what I mean. I don’t know where it came from, and that is not important.

bool isValidFile = (fileSize > 0 && fileSize < maxSize) && (fileType == "jpg" || fileType == "png" || fileType == "gif") && !(filename.contains("..") || filename.contains("/")) && (uploadedTime > lastMaintenanceTime) && (userId == ownerUserId || userRole == "admin" || (userRole == "editor" && sharedWithUser)) && !isCorrupt;

The actual formatting of a visually restructured version is a matter of personal preference; there are a million other good ways to format such an expression. Assuming I’m only allowed to add whitespace and change nothing else, here’s how I might restructure it.

bool isValidFile =
    (
        fileSize > 0 
    && 
        fileSize < maxSize
    ) 
&& 
    (
        fileType == "jpg" 
    || 
        fileType == "png" 
    || 
        fileType == "gif"
    ) 
&& 
    ! (
            filename.contains("..") 
        || 
            filename.contains("/")
    ) 
&& 
    (uploadedTime > lastMaintenanceTime) 
&&
    (
        userId == ownerUserId 
    || 
        userRole == "admin" 
    || 
        (
            userRole == "editor" 
        && 
            sharedWithUser
        )
    ) 
&& 
    ! isCorrupt;

Try to grok both statements.

The point is: the restructured version is easier to mentally consume. You can spot the various sub-expressions and easily digest each of the sub-expressions in turn.

Don’t Mislead

Adding more whitespace can be very helpful, but it can also be misleading, so I am extra careful and will triple-check my work to make sure the formatting matches the actual code structure.

Here are examples of code where poor formatting actively misleads the reader:

if (a > 5) 
    increase = a * 0.9;
    total += increase;

if (b > 7)
    increase = b * 0.87;
    total += increase; 
  
var x = 
    a > 2
&&
        c > 3
    ||
        d > 5;

// bug! y will be 3, not 7 – the semicolon ends the statement early,
// and the indented + 4 is a separate, useless statement
var y = 3;
    + 4;

(I know the code does not make much sense; I am just making a point.)

These could easily mislead a human reader into misinterpreting the code.

In my own defensive coding style, I never use if statements without a { some code... } compound statement, even if the body is just one statement.

Also, I will try to add parentheses around sub-expressions, even when operator precedence makes them superfluous.

if (a > 5) {
  increase = a * 0.9;
}
total += increase;

if (b > 7) {
  increase = b * 0.87;
}
total += increase; 
  
var x = 
    (
        a > 2
    &&
        c > 3
    )
||
    d > 5;

var y = 3 + 4;

Conclusion

Whitespace costs nothing, but the clarity it can add is priceless. Use it deliberately, but be careful to not use formatting that misleads the human reader or introduces bugs.


InDesign, Tables, Scripts, Vibe Coding

Hairy Splits

A story about splitting InDesign tables. You can find out more about TableAxe here:

https://rorohiko.com/TableAxe

Video demo: https://youtu.be/kq2Ilomtgyw

Recently, I had to do some work with tables in InDesign, and had a need to split a table vertically, creating two tables.

I had not had that need before, and I blindly assumed there would be a menu option to do that.

Turns out… no, that does not seem to exist. Ah well, surely there will be a script somewhere that does that? Turns out… nothing that I can find. There are some scripts (I found this one by Peter Kahrel) that are related, but none that had the functionality I was after.

That feels like a real functionality gap to me!

Hey, why not try vibe coding and experience first hand how well that works?

Starting Simple

I used Claude 3.7 and explained what I needed.

I could have tried Omata Lab’s amazing MATE, but I like to work closer to the metal, so I used ‘raw Claude’.

Claude confidently spit out an ExtendScript, and on a quick diagonal read it seemed to kind of make sense.

I tried to run the script – nah. That did not work. Inspecting the script a bit closer, it turned out Claude had ‘imagined’ some handy new DOM methods that don’t actually exist in the real InDesign DOM.

I got into a ping-pong match with Claude: fixing one problem created a new problem.

A game of whack-a-mole. Eventually I got a splitter script going, but I did not like the result very much, and it took me a bit longer than if I had started from scratch under my own power.

This initial script by Claude had some useful tidbits in it, but the script as a whole felt like a one-trick pony.

One useful tidbit I learned: you can pass negative indices to InDesign collections to address elements at the end of the collection – so document.rectangles[-1] is the very last rectangle. Never too old to learn something new.

Put The Thinking Cap On

When I was looking for an existing script earlier, I found some scripts that could split tables horizontally, so I initially did not envision adding horizontal split functionality to my script.

But then I started thinking: what if I made a script that was a one-stop-shop for all kinds of table splitting and merging? I’d surely use such a script if it existed!

Creating a single script to handle all kinds of table split/merge operations looked like a worthwhile endeavour!

Creating TableAxe

So, I started over and built TableAxe.

TableAxe is a script that can split and join tables in InDesign, either vertically or horizontally.

There are a few interesting aspects to TableAxe.

  • No user interface to speak of. The only user interface it presents is a dialog with a message and an OK button
  • A single script handles both merging and splitting
  • Proper handling of header and footer rows
  • A user manual built into the script, and helpful feedback when something is wrong
  • PluginInstaller handles installing and uninstalling the script

More info about TableAxe: https://rorohiko.com/TableAxe

No user interface

Two reasons.

  • Developing user interfaces is expensive.

If I wanted TableAxe to have a user interface with fields, checkboxes and buttons, I’d need to either create a UXP plugin or a CEP panel. That’s perfectly feasible, but quite a bit of extra effort. Alternatively, I could use ScriptUI (a built-in feature of ExtendScript), but ScriptUI is no longer well supported by Adobe, and I don’t really want to have my script be dependent on how well ScriptUI still works.

  • A clickable user interface with fields and buttons slows the user down.

In my experience, InDesign users are a fast and ferocious lot. They love keyboard shortcuts and one-click operations. If I can avoid having a user interface, users can drive the script really, really fast. Fly through a document and split five tables in five seconds, that kind of thing.

Single Script

TableAxe is a single script. You just run it, and based on what’s currently selected in the InDesign document, it knows what to do.

It knows whether to split or merge. It knows whether it’s horizontal or vertical. It knows what tables to merge.

That means you can assign a single keyboard shortcut to the script, and that one keyboard shortcut does everything.

Fast, fast, fast and furious.

Properly Handle Header and Footer Rows

Header and footer rows are not ‘split’. They are considered to be part of the ‘table border’. So if you split a table with header and footer rows horizontally, these rows will appear in both tables.

When you merge two tables to make a taller table, TableAxe will verify that the header and footer rows match before merging.

Helpful Feedback

I like scripts that don’t leave the user guessing when something is wrong.

When the current selection cannot be handled sensibly by the script, it will provide helpful feedback to the user.

If the script runs without anything selected, it will display a user manual.

If the user attempts to split through a header or footer row, the script will tell the user that does not work.

If the user attempts to merge two tables that don’t fit together (e.g. wrong number of rows or columns, or mismatched header/footer), the script will point the user to the issue.

PluginInstaller

I am using PluginInstaller to distribute TableAxe.

A TableAxe license is US$4 per seat per year, and I’ve chosen to make the license optional. If the user does not pay for an activation, the tool continues to work and remains fully functional.

First things first: having people pay me US$4 for a license will not cover my cost of development and hosting.

The real reason for the US$4 is twofold.

In my experience, people appreciate something that they paid for more than the identical thing they got for free.

On the flip side, having $4 payments come into my account would make me feel appreciated, and might entice me to improve the script.

In my opinion, all too often people confuse ‘value’ and ‘cost’. People often save hundreds or thousands of dollars with a script they did not have to pay for. I hope to convince at least some end-users to value such ‘free/near-free’ scripts by the value they bring, not by the amount they paid for them.

More about this pet peeve here:

https://coppieters.nz/the-value-paradox-in-adobe-ecosystem-development/

Sidenote: What Is PluginInstaller?

PluginInstaller is a component of the Tightener project.

https://PluginInstaller.com

At present, PluginInstaller is in a ‘Minimum Viable’ state. There is still a lot of work to do, but it works. I’ve been using it for my own company for over a year now and have been generating revenue from scripts, plug-ins and extensions.

The aim is to fill a gap and create an ‘open installer/packager for all’. All kinds of add-ons, free or commercial, from all kinds of developers, also beyond the Adobe eco-system.

PluginInstaller can be used for free by indie developers. Some of the features:

Store Window. PluginInstaller comes with an optional default store window. Other developers can opt to use this store, or not use it at all, or integrate their own.

Payment Gateway. PluginInstaller comes with an optional default payment gateway for commercial scripts and donationware (currently I’ve implemented PayPal). Other developers can use this payment gateway, or they can integrate their own.

Software Licensing. PluginInstaller handles activations for commercial software, coupon codes, demo versions, donationware, nagware, allows the users to add a fee to their payment…

Source Code Protection. PluginInstaller embeds protection features similar to JSXBIN, ZXPSignCmd, packaging… in a single packaging program. It protects ExtendScript source code and allows the developer to manage demo versions, activations, time bombs…

Sell ExtendScript: PluginInstaller makes it commercially viable to sell ExtendScript solutions. Many needs can be served with simple scripts that don’t need full-fledged CEP or UXP solutions.

Currently, PluginInstaller handles InDesign ExtendScript, InDesign UXPScript, InDesign CEP panels. More to come as time and money permit.


Elegance Is Not A Goal

This is the 10th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

Rule #10: There is no prize for elegance.

If elegance serves readability, I’ll take it.

But I don’t strive to write elegant code. I will do it if I can, or if the problem demands it, but in run-of-the-mill code, I won’t chase elegance at the cost of clarity.

In my experience a lot of elegant code is ‘deep code’, and demands serious thought before it can be understood.

One of the most famous examples is the fast inverse square root algorithm from Quake III Arena. This algorithm computes 1/√x approximately 4 times faster than using standard floating-point operations. It uses a magic constant and bit-shifting operations that exploit the IEEE 754 floating-point representation in a mathematically elegant way. It’s nearly impossible to understand at first glance.

Such code is needed and has its place, but only within a very narrow context.

New Is Not Better

I need to frequently hop between multiple environments – ExtendScript, C++, PHP, JavaScript, TypeScript… spanning multiple generations of these programming languages.

One thing I observe is that as programming languages are modernized, they seem to gain new syntax. Languages also seem to be prone to some form of jealousy. For example, Python will add some cool tricks, and soon enough similar constructs will also show up in other languages.

These newer language features often add elegance, yet only sometimes add clarity.

I’ll look at JavaScript next, but the core ideas apply to other environments as well.

JavaScript

Features I Avoid Unless Justified

Take the arrow function => notation in JavaScript. Apart from lexical this binding, the main advantage, as far as I can tell, is needing fewer keystrokes. It makes the code denser and more elegant, and readability suffers.

There are also the destructuring features and spread/rest syntax. Yes, more elegant code, but I find myself needing more time to read and understand code that uses these constructs.

Optional chaining: very elegant, but it increases the likelihood of bugs slipping by unnoticed.
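A small sketch of the failure mode I have in mind (the config object is made up): optional chaining turns a broken assumption into a silent undefined instead of a loud error.

```javascript
var config = { server: { host: "example.com" } };

// Typo: 'sever' instead of 'server'. With ?. this does not throw; it just
// quietly yields undefined, and the bug can surface much later.
var host = config.sever?.host;

// Without ?. the same typo would throw immediately, close to the mistake:
// var host = config.sever.host; // TypeError
```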

Then there are transpilers and polyfills, which provide modern features in old versions of JavaScript. My suspicion is that these come with overheads that must be accepted wholesale, and build processes get more complex.

Features I Like

Some of the code I write will be guaranteed to run in a modern JavaScript context, in which case I can and will use some of the modern JS features.

In that case, keywords like let and const are useful improvements. They do not do much for elegance, but they significantly improve reliability and reduce accidental bugs.

Other positive changes: for…of, default parameters, and template literals can all help make the code easier to understand.
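A small sketch combining these features (the data and names are made up):

```javascript
const GREETING = "Hello";        // const: accidental reassignment becomes a loud error
let lines = [];

function describe(name, role = "guest") {    // default parameter
    return `${GREETING}, ${name} (${role})`; // template literal
}

for (const name of ["Ann", "Bob"]) {         // for...of: no index bookkeeping
    lines.push(describe(name));
}
```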

What I Value More Than Elegance

Clarity over terseness. I don’t mind repeating myself or adding a few extra lines if it makes the logic easier to follow. To me DRY (Don’t Repeat Yourself) is not dogma – it’s a helpful rule of thumb.

Predictability over novelty. Language features that behave in subtle or surprising ways tend to age poorly. The fewer hidden rules, the better.

Debuggability over brevity. I want to be able to drop into a debugger and understand what’s going on, no extra decoding required.

Next

If you’re interested in automating part of a Creative Cloud-based workflow, please reach out to [email protected].

We provide training, co-development, mentoring for developers working around Adobe’s Creative Cloud and InDesign Server.

We can also run workshops – have a look at Workshop: Mastering Automation for Real-World Adobe Workflows.

If you find this post to be helpful, make sure to give me a positive reaction on LinkedIn! I don’t use any other social media platforms. My LinkedIn account is here:

https://www.linkedin.com/in/kristiaan

Avoid Literals

This is the 9th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

Rule #9: Try hard to avoid literals.

In nearly all code I write, I need to reference some constants: numerical constants like 3.1415926, string constants like "this" or 'that', color constants…

In the heat of the moment, it’s easy to just type in the number or the string and be done with it.

But I find that avoiding literals in my code and instead using named constants offers multiple benefits.

Disadvantages Of Literals

First, some disadvantages of using literals:

Typos Are Not Always Errors

A typo in a literal is often not an error: the code will still compile and execute, but it will be wrong.

Example: if I am using ExtendScript and testing whether a variable contains a string, I might write:

if ("string" == typeof v) { ... }

Sometimes, when I am editing code, my cursor is somewhere in the document and I might accidentally hit a key as I clumsily reach for my coffee, inadvertently changing this to read

if ("stxring" == typeof v) { ... }

The code is now broken, and the breakage is not obvious. I might spot this later when I commit the code to a git repository, but that would be a lucky coincidence.

Now, imagine I introduce a named constant instead of the literal:

const TYPEOF_STRING = "string";
...
if (TYPEOF_STRING == typeof v) { ... }

If I now accidentally hit a key in the same spot, the code becomes

if (TYPxEOF_STRING == typeof v) { ... }

and my code editor will complain. When I try to run it, an error occurs, so the issue won’t go unnoticed.

Accidental Equality Spoils Find-And-Replace

Another issue is that my code often has multiple identical strings or values, some with different meanings.

For example, I might have two database tables (say, CUSTOMER and PURCHASE) that both contain a column CUSTOMER_ID.

Imagine the code is littered with literal strings "CUSTOMER_ID". I will have to carefully read the code to derive from the context which column is being referenced, the one in the CUSTOMER table or the one in the PURCHASE table.

Or, say, I might have multiple strings "green". Sometimes it is a reference to a named CSS color, sometimes it is a string that needs to be inserted into some message that needs to be displayed to the user.

I will do something like this:

const COL_NAME_CUSTOMER_ID = "CUSTOMER_ID";
const COL_NAME_PURCHASE_CUSTOMER_ID = "CUSTOMER_ID";
...
const CSS_COLOR_GREEN = "green";
const NORMAL_STATUS_NAME = "green";

By writing code that uses such named constants instead of the literals, the code becomes more self-explanatory, and there is less room for confusion.

When exploring code (my own or someone else’s) I use my text editor to do ‘global finds’ for interesting strings. Globally finding stuff is a great way to explore a large code base.

If the code is littered with hundreds of literal strings "CUSTOMER_ID" I cannot do a targeted search for only those areas in the code that access the PURCHASE table. I will also ‘catch’ all the code that accesses the CUSTOMER table.

On the other hand, if the code is using named constants, I can simply do a ‘find’ for COL_NAME_PURCHASE_CUSTOMER_ID and find only the areas of the code I am interested in.

Advantages Of Named Constants

Using named constants comes with a few advantages.

Easy To Change The Values

I might have something like:

const CUSTOMER_NAME_ERROR_COLOR = RGB(255,0,0);
const CUSTOMER_NAME_OK_COLOR = RGB(0,255,0);
…
const BUTTON_SHADE_COLOR = RGB(0,255,0);

Note that CUSTOMER_NAME_OK_COLOR and BUTTON_SHADE_COLOR have the same value, but have a different meaning.

Imagine that it turns out that this does not work well for people with red-green color blindness, and I want to change CUSTOMER_NAME_OK_COLOR to something different.

If the code is consistently using such named constants, I can easily tweak a single line of code and change the colors for better contrast.

On the other hand, if the code is littered with references to literal RGB(0,255,0) I need to use my text editor and perform a global find-and-replace.

And this becomes problematic because sometimes RGB(0,255,0) is a button shade color. I need to carefully read lines of code to make sure I am not changing a button shade color instead of a customer name color.

Easy To Read

Using named constants with carefully chosen names helps make the code more self-explanatory.

A simple example: in my own logging code, I support multiple levels of logging, from ‘NONE’ (mum’s the word) to ‘TRACE’ (crazy chatterbox).

Internally, these levels correspond to integers 0 – 4. But the code uses named constants rather than literal values, which makes it easier to follow.

By judiciously choosing meaningful names, I can avoid having to insert comments.

const LOG_LEVEL_NONE = 0;
const LOG_LEVEL_ERROR = 1;
const LOG_LEVEL_WARN = 2;
const LOG_LEVEL_NOTE = 3;
const LOG_LEVEL_TRACE = 4;
...
function logNote(reportingFunctionArguments, message) {
    if (LOG_LEVEL >= LOG_LEVEL_NOTE) {
        // Allow logNote(message) calls with a single argument
        if (! message) {
            message = reportingFunctionArguments;
            reportingFunctionArguments = undefined;
        }
        logMessage(reportingFunctionArguments, LOG_LEVEL_NOTE, message);
    }
}

After a coding stint, I will spend some time renaming variables, functions and constants in an effort to make the code more self-explanatory.

Most modern IDEs have built-in refactoring functionality that allows me to rename things (variables, functions, constants…) and automatically update all references to them.


Software Development For The Adobe Ecosystem: A Value Paradox.

After 30+ years developing software, and 20+ years developing software for the Adobe ecosystem, I have some observations about the disconnect between value and compensation.

A Common Client Journey

Step 1: recognition. A creative professional realizes they’re wasting time on repetitive tasks that could be automated.

Step 2: discovery. They find a free script or plug-in that (partially) solves their problem and might manually handle the rest.

Step 3: inquiry. When they reach out about customization, they’re surprised by the cost: typically a figure with three or four zeros.

Step 4: roadblock. Without budget authority, the conversation ends or enters a challenging approval process.

The worst scenario? When approval comes but payment doesn’t, forcing developers to implement protections like time-bombing their work.

The Free Software Trap

Releasing free scripts and plug-ins can earn a developer some ‘kudos’ but also creates perception problems.

  • A near-$0 price tag is perceived as “$0 value” rather than “$1000 value at $0 cost”.
  • If the software is free, users expect unlimited free support.
  • Donationware does not work: everyone assumes someone else will donate.

The Communication Challenge

How do we, as developers, effectively convey that free doesn’t mean “without value”? Every script and plug-in represents someone’s time, expertise, and effort.

Custom development at $1000 isn’t “expensive” when it is compared to the true value or the true savings.

What we really want is a fundamental shift in how prospective customers perceive the value of automation in their workflow.

Prompt-Whack-A-Mole

Tools like MATE now use AI to generate custom scripts for non-coding users, seemingly threatening custom development.

As someone who leverages AI for coding myself, I’ve experienced the significant limitations of AI code generation firsthand.

Beyond a certain complexity threshold, AI struggles with the comprehensive view needed for robust solutions.

Attempting to generate complex scripts becomes a game of “prompt-whack-a-mole”. Fix one issue, another pops up elsewhere.

AI excels at specific, contained tasks but falls short when integrating multiple components or handling edge cases that an experienced developer will anticipate.

This reinforces rather than diminishes the value of experienced developers. We’re not just code writers. We’re architects who understand the entire ecosystem and can design solutions that stand the test of time and use.

Next

If you’re interested in automating part of a Creative Cloud-based workflow, please reach out to [email protected]. We create custom scripts and plug-ins, large and small, to speed up and take the dread out of repetitive tasks.


Don’t Obfuscate

This is the 8th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

Rule #8: Don’t obfuscate your code.

When I am working on someone’s existing code, I often come across obfuscated code.

Two Types Of Obfuscation

Protection Against Reverse Engineering

Purposeful obfuscation and condensation is a useful tool in some programming environments: it can add a layer of protection against snooping of source code.

I am not worried about this type of obfuscation; it’s like a ‘light-weight’ compilation of source code into an executable.

For compiled languages, the source code is converted into machine code. An indirect side effect of compilation is that the result is somewhat harder to reverse engineer than code for an interpreted language like JavaScript.

JavaScript is normally deployed as source code. Obfuscating and condensing (‘uglifying’) the source code offers a thin veneer of extra protection against reverse engineering.
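As a hypothetical before-and-after: an uglifier takes readable source and emits an equivalent but far less inviting form.

```javascript
// Readable source...
function averageOf(values) {
    let total = 0;
    for (const value of values) {
        total += value;
    }
    return total / values.length;
}

// ...and roughly what an uglifier might emit: same behavior,
// with names and white space stripped away
function a(n){let t=0;for(const e of n)t+=e;return t/n.length}
```

Both functions behave identically; only the second one makes a snooper work for it.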

Optimize For Keystrokes?

The second form of obfuscation is less beneficial.

What it boils down to is that the source code contains ‘lumps’ of code, often on a loooong single line, often with very little white space, often with very short variable names that are void of meaning.

I understand why it happens: as a software developer, I can get ‘into the zone’. When that happens, there is an urge to express thoughts and ideas as efficiently as possible, before I lose my train of thought and before I need to get onto that Zoom call at 3:00pm.

Obscure gobs of code can also be a consequence of using powerful but obscure language constructs. It’s cool and amazing and all that, but I know that, six months later, I’ll lose an hour trying to re-acquaint myself with the code.

JavaScript example:

const statusMsg=age>=18?(income>50000 ? (hasDebt ? "Eligible with review" : "Fully eligible"): (hasReferences ? "Eligible with guarantor": "Not eligible")): "Too young";

C++ example using template metaprogramming to calculate a factorial at compile time:

// Dense version - Compile-time factorial calculation
#include <iostream>
using namespace std;

template<int N> struct factorial {
    enum { ret = factorial<N-1>::ret * N };
};
template<> struct factorial<0> {
    enum { ret = 1 };
};

int main() {
    cout << "7! = " << factorial<7>::ret << endl;
    return 0;
}

Fluff it up!

The thing is: after a good coding stint, I will always go through the code and ‘let the light in’.

I will look for ‘dense’ bits of code, and untangle them.

Split it into multiple lines, add lots of white space, visually structure the expression logic, give variables a better name, add intermediate variables with sensible names…
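For example, the dense eligibility ternary from earlier could be untangled like this (eligibilityStatus is a name I’m inventing for the illustration):

```javascript
function eligibilityStatus(age, income, hasDebt, hasReferences) {

    let statusMsg;

    if (age < 18) {
        statusMsg = "Too young";
    }
    else if (income > 50000) {
        // High income: eligible, but debt triggers a review
        if (hasDebt) {
            statusMsg = "Eligible with review";
        }
        else {
            statusMsg = "Fully eligible";
        }
    }
    else if (hasReferences) {
        // Lower income: references can still get them in
        statusMsg = "Eligible with guarantor";
    }
    else {
        statusMsg = "Not eligible";
    }

    return statusMsg;
}
```

More lines, yes, but each branch can now be read, commented and stepped through in a debugger on its own.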

I’ll also refrain from using language features whose only benefit is terseness of the source code. Unless there are clear benefits to a ‘terse’ or ‘cool’ language feature, I’ll avoid it.

If I use an advanced language feature, I’ll document while my understanding is fresh. My approach is to add a document called Refresher.md into the source code where I document things that I know I’ll have forgotten in a few months time: how to build, what can be found where, what is the necessary context…

Wait A Day

It’s best to leave a little bit of time, but not too much, after the code is written; a day or so works well for me. When I re-read my own code a day later, I’ll more easily notice parts of the code that are unclear or unnecessarily dense.


Workshop: Become An Adobe Automation Ninja

Do you need to automate Adobe software, and don’t know how to get started?
Do you maintain legacy Adobe automation code?
Stuck with existing InDesign, Illustrator, or Photoshop scripts or plug-ins that have grown messy over time?
Trying to fix AI-generated code that doesn’t work right?

You’re not alone.

Many teams depend on custom scripts and plug-ins to speed up publishing workflows.

These projects often become hard to maintain as they get handed over between developers.

I’m planning a workshop aimed at people who automate Adobe products.

Being well-prepared will save days or weeks of onboarding and debugging time.

Interested? I’ll be running a one-day version of this workshop at the Creative Developers Summit on June 4, in Phoenix, AZ:

https://creativeproweek.com/phoenix-2025/creative-developers-summit/

Who Can Benefit?

  • In-house developers working on automation for Adobe Creative Cloud and InDesign Server
  • Experienced developers who need a head-start automating Adobe apps
  • Designers who need to modify existing scripts
  • Technical leads who want better development workflows

What You’ll Learn

  • Adobe’s automation models: How automation really works in InDesign, Illustrator, Photoshop, Bridge…
  • Legacy code handling: How to understand, document, refresh and update old code
  • Better coding habits: How to write code that’s robust and maintainable
  • Real examples: We’ll look at actual scripts and source code to spot common problems and fix them
  • Hands-on practice: Exercises to build your skills
  • Debugging methods: Find and fix bugs faster
  • Source Code Control and Issue Tracking

Why This Workshop Is Different

This workshop comes from years of hands-on work. It focuses on practical solutions for real problems that come up when automating in busy environments.

For in-house teams, properly training your developers will cut down learning time and prevent wasted effort.

Interested?

I’m also taking registrations of interest for custom workshops that fit specific needs.

If this sounds useful for you or your team, email [email protected]. Your input helps shape the workshop.

The workshop can happen remotely or at your location, whatever works best for you.

Need Custom Automation?

Alternatively, if you want to automate part of your Creative Cloud workflow, contact [email protected].

We create custom scripts and plug-ins, both simple and complex, to speed up repetitive tasks.

My LinkedIn account is: https://www.linkedin.com/in/kristiaan

Exceptional!

This is the 7th post in my series: Coding Without the Jargon, where I show how I clean up and gradually improve real-world code.

The following post is an opinion piece.

Rule #7: Exceptions are exceptional.

There are multiple approaches to using and handling exceptions, and other approaches might well be better than mine.

Below, I’ll elaborate on how I personally use exceptions and how that approach works for me.

Exceptions

A long time ago, when I first got my feet wet with languages like FORTRAN IV, ANSI Standard BASIC, C, and COBOL, there were no exceptions.

Dividing by zero would typically terminate the program and send a core dump to the mainframe’s chain printer.

Once exceptions were introduced into various programming languages, I preferred avoiding them, and even today, I still avoid them.

Nowadays I see a lot of code which uses exceptions and exception handling as part of the normal, day-to-day flow of the code.

I also see exceptions being used to set up some ‘out of band’ communication channel.

Those are approaches I will avoid. For me, exceptions are indications of a bug, and should not be normalized. Talking about ‘normal’ exceptions makes me think of The Boy Who Cried Wolf.

Why I Dislike Exceptions

Giving Up Control

To me, throwing an exception is a bit similar to throwing a tantrum. Throwing your hands in the air and screaming ‘HERE – SOMEONE ELSE DEAL WITH THIS! I AM DONE!’, then stomping off.

Throwing an exception can lead to loss of control.

I will know exactly when the code throws, but I might not know who or what will catch the exception or what they’ll do. Will they crash the program? Or will they continue processing? Throwing feels like “Après moi, le déluge”.

I prefer to stay in control in my code as much as possible.

My strategy for that is two-fold:

  • I will test pre-conditions and try really hard to avoid any exceptions being thrown. For example, instead of dividing by zero, I’ll explicitly test the divisor and handle a zero before trying to divide by it (logging, returning an error…).
  • If, despite my best efforts, my pre-condition tests turn out to be incomplete or unable to prevent an exception being thrown, I’ll make sure to report the exception into the logging system, and then handle it as gracefully as possible.

If I can, I will catch any unavoidable exception as close to its origin as possible.

Catching closer to the code that caused the problem gives me a better chance of knowing what went wrong. I might be able to decide whether it’s OK to soldier on, or whether it’s better to ‘log-and-terminate’ or ‘log-and-return’.

A lot depends on external context. The function I am writing might be the caller or the callee, and I might not have control over the expectations of the code outside my function. In those cases, I’ll act as required: I will re-throw if that’s what’s expected, and I’ll catch, log and absorb any exceptions thrown by external code I am calling if that is what’s needed.

An example: I will often use exceptions to safely handle bad JSON input for JSON.parse() in JavaScript. There is no simple pre-condition I can use to verify whether a string is valid JSON. Instead, I will catch and log any throw coming out of JSON.parse() immediately, on the next line. Such a throw will normally only occur if some buggy upstream code is generating invalid JSON, which is worth tracking down and fixing.
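That pattern might look like this sketch (parseJsonSafely is a name I’m inventing for the illustration, and console.error stands in for my logging calls):

```javascript
function parseJsonSafely(jsonText) {

    let retVal = undefined; // undefined signals invalid JSON to the caller

    try {
        retVal = JSON.parse(jsonText);
    }
    catch (err) {
        // Only buggy upstream code should ever get us here;
        // log it so the root cause can be tracked down
        console.error("parseJsonSafely: invalid JSON: " + err);
    }

    return retVal;
}
```

The catch sits right next to the call that can throw, so there is no doubt about where the exception came from.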

Debugging

A second reason for my dislike is ease of debugging. Many debuggers have an option to ‘break on any caught or uncaught exception’.

This debugger feature would be nearly useless to me if my code were written to routinely throw and catch exceptions as part of the normal flow. There would be many tens, if not hundreds of uninteresting debug breaks during a session, whenever the code flow hits a ‘normal exception’.

Instead, in my code there are almost no ‘normal exceptions’.

My code rarely throws, so I can turn this debug option on. If and when it breaks into the debugger, the ‘break’ is exceptional and worth investigating.

Out-of-Band Communication

If all the surrounding code is mine, I will avoid using exceptions as a mechanism for ‘normal’ non-error control flow, and I won’t normally re-throw.

I find that exception-based data passing makes the code harder to understand.

Instead I will set up (and document) some mechanism that uses either a standard return, or maybe some call-by-reference variable, to report any failures of pre-conditions.

I might return an object with an optional ‘error’ attribute to report back failure, or I might return some ‘special’ value – e.g. undefined or null instead of a string or a number.

Performance

I don’t know how exceptions are implemented in every programming language, but at least in C/C++, handling a throw is costly in terms of CPU load.

For most C/C++ compilers, adding exception handling code remains at close to zero CPU cycle-cost as long as the code does not throw.

If and when the code throws, handling the exception costs a fair bit of cycles.

In other languages this can be (much) less of a factor, but sticking to ‘exception avoidance’ everywhere makes it easier to port code and algorithms back and forth between languages.

Scaffolding

Most of my code looks akin to the following scaffold in a fictitious programming language:

func someName(parameters) {

  let retVal = DEFAULT_RET_VAL;

  logEntry();

  do { // Condition Ladder
  
    try { // Catch the unexpected!

      // Rung 1
      if (SOMETHING_IS_WRONG(parameters)) {
        logError("SOMETHING IS WRONG");
        break;            
      }

      // Rung 2
      if (SOMETHING_ELSE_IS_WRONG) {
        logError("SOMETHING ELSE IS WRONG");
        break;            
      }

      // Rung 3
      if (SOME_EDGE_CASE) {
        retVal = RET_VALUE_FOR_EDGE_CASE;
        break;            
      }

      ...
 
      retVal = SOME_CALCULATION;

    }
    catch (err) {
      // This only occurs for some condition I 
      // forgot or overlooked. I need to add
      // some more rungs to the condition ladder
      logError("Drats, an exception " + err);
      retVal = SOME_ERROR_CONDITION_VALUE;
    }
  }
  while (false); // End Condition Ladder

  logExit();

  return retVal;
}

The actual ‘shape’ depends on the programming language and its syntax. For example, Xojo has no do-break-while(false) but has DO-EXIT-LOOP UNTIL True.

The various log...() calls would also have a mechanism to determine and report the name of the function they’re in.
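As a concrete sketch, here is how that scaffold might look in plain JavaScript for a trivial division function (safeDivide is invented for the illustration, and console.error stands in for the log...() calls):

```javascript
function safeDivide(numerator, divisor) {

    let retVal = undefined; // the DEFAULT_RET_VAL

    do { // Condition Ladder

        try { // Catch the unexpected!

            // Rung 1: pre-condition test
            if (typeof numerator != "number" || typeof divisor != "number") {
                console.error("safeDivide: non-numeric input");
                break;
            }

            // Rung 2: handle the edge case before it can bite
            if (divisor == 0) {
                console.error("safeDivide: division by zero");
                break;
            }

            retVal = numerator / divisor;
        }
        catch (err) {
            // Only reached for some condition I forgot or overlooked
            console.error("safeDivide: unexpected exception " + err);
        }
    }
    while (false); // End Condition Ladder

    return retVal;
}
```

Each break jumps straight to the single exit point, so there is exactly one place where the function returns, and every failure path has been logged on the way out.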

Next

If you’re interested in automating part of a Creative Cloud-based workflow, please reach out to [email protected]. We create custom scripts and plug-ins, large and small, to speed up and take the dread out of repetitive tasks.
