One of the pain points for new Tcl users is the lack of an obvious package repository and management system. Python has PyPI and pip. Tcl has Teapot and Teacup, but these were most popular while ActiveState maintained them. If you just want to help users (or yourself) install a few Tcl packages hosted on GitHub, the Tin package manager is worth checking out.
I developed testin as a way to get to know Tin. Once you manually install Tin, installing other packages is as easy as
$ sudo tclsh
% package require tin
% tin add -auto testin https://github.com/johnpeck/testin install.tcl
% tin install testin
...on Linux (since the default package directory is owned by root).
If you're already using GitHub for development, Tin doesn't force you
to learn a new package repository.
The Testin repository has detailed instructions for installing Tin,
Tcllib, and finally Testin.
A nugget: named arguments for Tcl procs in Testin
Testin has one function: intlist. It simply makes a list of integers, similar to Python's range. But it experiments with named arguments:
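The proc itself isn't reproduced here, but a version built on cmdline's getoptions might look like this sketch (the option names and defaults are my assumptions, not necessarily Testin's exact source):

```tcl
package require cmdline

proc intlist {args} {
    # Named arguments with defaults, courtesy of tcllib's cmdline.
    # Omitted options fall back to the defaults in the option list.
    set options {
        {first.arg 0 "First integer in the list"}
        {points.arg 5 "How many integers to generate"}
    }
    array set params [::cmdline::getoptions args $options]
    set result [list]
    for {set i 0} {$i < $params(points)} {incr i} {
        lappend result [expr {$params(first) + $i}]
    }
    return $result
}

puts [intlist -first 10 -points 3]
```

A call like `intlist -points 3` then starts from the default first integer of 0.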
Nifty, right? The intlist function takes two arguments, but cmdline allows omitted arguments to have default values (in this case, the default first integer is 0). And named arguments make procedure calls easier to understand. This idea came from the Tcler's Wiki.
I've been trying to be better about throwing errors in my Tcl code for tcladu. There seem to be at least three ways to do this: error, throw, and return. Is there any difference? Here's what I tried:
# Demonstrate handling errors with Tcl
proc iterint {start points} {
    # Return a list of increasing integers starting with start with
    # length points
    set count 0
    set intlist [list]
    while {$count < $points} {
        lappend intlist [expr {$start + $count}]
        incr count
    }
    return $intlist
}
proc pdict {dict {pattern *}} {
    set longest 0
    dict for {key -} $dict {
        if {[string match $pattern $key]} {
            set longest [expr {max($longest, [string length $key])}]
        }
    }
    dict for {key value} [dict filter $dict key $pattern] {
        puts [format "%-${longest}s = %s" $key $value]
    }
}
proc throw_error { code } {
    switch $code {
        1 {
            set message "This error was thrown with error"
            set errorcode "FIRST_ERROR_CODE"
            # Careful: error's signature is error message ?info? ?code?,
            # so the literal -errorcode below lands in the info argument
            # and $errorcode in the code argument.
            error $message -errorcode $errorcode
        }
        2 {
            set message "This error was thrown with throw"
            set errorcode "SECOND_ERROR_CODE"
            throw $errorcode $message
        }
        3 {
            set message "This error was thrown with return -code error"
            set errorcode "THIRD_ERROR_CODE"
            return -code error -errorcode $errorcode $message
        }
    }
}
foreach code [iterint 1 3] {
    puts "-----------------------------------"
    try {
        throw_error $code
    } trap {FIRST_ERROR_CODE} {message optdict} {
        puts "Error handler for FIRST_ERROR_CODE"
        puts "Message is: $message"
        puts "Context:"
        pdict $optdict
    } trap {SECOND_ERROR_CODE} {message optdict} {
        puts "Error handler for SECOND_ERROR_CODE"
        puts "Message is: $message"
        puts "Context:"
        pdict $optdict
    } trap {} {message optdict} {
        puts "Catch-all error handler"
        puts "Message is: $message"
        puts "Context:"
        pdict $optdict
    }
}
puts "-----------------------------------"
...and here's the output:
-----------------------------------
Error handler for FIRST_ERROR_CODE
Message is: This error was thrown with error
Context:
-errorinfo = -errorcode
    (procedure "throw_error" line 1)
    invoked from within
"throw_error $code"
    ("try" body line 2)
-errorcode = FIRST_ERROR_CODE
-code = 1
-level = 0
-errorstack = INNER {invokeStk1 throw_error 1}
-errorline = 2
-----------------------------------
Error handler for SECOND_ERROR_CODE
Message is: This error was thrown with throw
Context:
-errorcode = SECOND_ERROR_CODE
-code = 1
-level = 0
-errorstack = INNER {returnImm {This error was thrown with throw} {-errorcode SECOND_ERROR_CODE}} CALL {throw_error 2}
-errorinfo = This error was thrown with throw
    while executing
"throw $errorcode $message"
    (procedure "throw_error" line 11)
    invoked from within
"throw_error $code"
    ("try" body line 2)
-errorline = 2
-----------------------------------
Catch-all error handler
Message is: This error was thrown with return -code error
Context:
-errorcode = THIRD_ERROR_CODE
-code = 1
-level = 0
-errorstack = INNER {invokeStk1 throw_error 3}
-errorinfo = This error was thrown with return -code error
    while executing
"throw_error $code"
    ("try" body line 2)
-errorline = 2
-----------------------------------
The differences are in the error options dictionary captured by the try command. The -errorstack value is richer with the throw command; otherwise the three commands do the same thing. One wrinkle: error takes positional arguments (error message ?info? ?code?), so the -errorcode literal in the first case actually lands in the info argument -- which is why -errorinfo begins with "-errorcode" in the first output block. throw and return -code error take the error code explicitly.
The first thing I realized when I started to use
tcladu was the need for
convenience functions. For example, a write_device followed by a
read_device amounts to a query, and it would be nice for that kind of
command to live in the same tcladu package. So I started to write
some Tcl, and I realized I didn't really know how to manage the
package version with this high-level addition.
The tcladu.so binary already provides the tcladu namespace and
package version. I set this version in a makefile that also drives
testing, so I'd really like the makefile/binary package version to be
the source of truth. So I need to extract that package version in the
convenience code. It turns out that there's a
package command
for this. These links are to Tcl 9.0 documentation, but my usage
doesn't depend on the version.
Adding these lines
load ./tcladu.so
set version [package present tcladu]
package provide tcladu $version
...to the tcladu.tcl source file synchronizes the Tcl and binary
source versions. I can then package this up with
pkg_mkIndex -verbose . tcladu.so tcladu.tcl
...using the pkg_mkIndex command. This produces the pkgIndex.tcl script distributed with the package source files:
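The generated index ends up looking something like this (a sketch from memory; pkg_mkIndex's exact output and the version number will vary):

```tcl
# pkgIndex.tcl -- generated by pkg_mkIndex
# Loads the binary, then sources the convenience-layer Tcl.
package ifneeded tcladu 1.1.0 \
    "[list load [file join $dir tcladu.so]]
[list source [file join $dir tcladu.tcl]]"
```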
...where you can see the combination of sourcing and loading done to make the package available. These convenience functions are coming in tcladu version 1.1.0.
Demonstration
Let's say you've downloaded a release binary from
Github, and you have a few (two)
ADU100s connected. You also need permissions to access the device,
but let's say you have those.
The package is just two files: pkgIndex.tcl, used by Tcl's
package command, and
tcladu.so, a binary produced from some C code.
Appended the package to Tcl's auto_path
The auto_path list tells Tcl where to look for packages.
Required the package
This both loads procedures into the tcladu namespace and initializes libusb.
Populated the connected device database
The discovered_devices command will populate a device database with
things like device handles and serial numbers. This must be called
before writing to or reading from devices.
Queried the device database for device 0
The serial_number command doesn't do anything with connected
hardware -- it just returns a serial number populated by
discovered_devices.
Sent the command to set/close the ADU100's relay
The write_device command takes a device index instead of some kind
of handle to identify the targeted device. It then takes an ASCII
command that you can find in the ADU100
manual to manipulate the
hardware relay. The last argument is a timeout for libusb (in
milliseconds), which will become more interesting when we get into
reading from the hardware.
Sent the command to read the relay status
Reading the relay status starts with telling the ADU100 to read the
status. It will prepare the result to be read by the next libusb
read. The return value for the RPK0 command is just a success code
-- not the relay status.
Read from the ADU100
The read_device command takes a device index, followed by the number
of bytes we want to read. This payload size is a placeholder for now,
although it has to be 8 bytes or larger. I want to keep it to handle
larger payloads on other Ontrak devices this might support in the
future.
The final argument is the familiar ms timeout. Libusb will throw a
timeout error if the read takes longer than this value. But this
error isn't fatal, and your code can catch this and simply try again.
This gives your application a chance to stay active while you wait for
a long hardware read.
The result is a Tcl list
containing the success code and return value. In this case, a 1
shows us that the relay is set/closed.
Sent the command to reset/open the ADU100's relay
This is the opposite of the set command.
Sent the command to check the relay status again
We'll now expect the hardware to report 0 for the relay status.
Read from the ADU100
The returned list is now 0 0, telling us that the command succeeded
and that the relay is reset/open.
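Putting the whole sequence together, a minimal session might look like this sketch. The package path is whatever directory holds pkgIndex.tcl, and I'm assuming the usual SKn/RKn relay command strings from the ADU100 manual alongside the RPK0 status command used above:

```tcl
lappend auto_path "/path/to/tcladu"  ;# directory containing pkgIndex.tcl
package require tcladu

tcladu::discovered_devices           ;# populate the device database
puts [tcladu::serial_number 0]       ;# serial number of device 0

tcladu::write_device 0 "SK0" 200     ;# set/close the relay, 200 ms timeout
tcladu::write_device 0 "RPK0" 200    ;# ask the ADU100 for the relay status
puts [tcladu::read_device 0 8 200]   ;# success code and status, e.g. 0 1

tcladu::write_device 0 "RK0" 200     ;# reset/open the relay
tcladu::write_device 0 "RPK0" 200
puts [tcladu::read_device 0 8 200]   ;# now 0 0
```

This needs an ADU100 actually connected (and the right permissions), so treat it as a shape rather than something to paste blindly.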
Xfig's depth settings can create some headaches when you use figure libraries. Library figures can use depths that conflict with your current drawing. Say I have this drawing of the Teensy Audio Board and a red circle,
...and then I move the red circle over the board:
This is clearly not what I want. The circle has a depth of 50, and
the board has depths between 10 and 60. The circle ends up on top of
some objects and not others. If you were using PowerPoint, you would
just select Bring to Front or Send to Back to arrange the objects.
But Xfig requires setting the depth of each member object
individually, which you should really do with a script like
figdepth.
Using figdepth
Figdepth has some options, the most important of which is the minimum depth argument
...which sets the depth of the topmost object in a fig file. The
script simply increments every depth by the amount needed to bring
the topmost object to your setting. You will, of course, need to have
everything you want to modify in its own fig file. You can then run figdepth like
...to write your new depth-adjusted figure. Use Xfig's
merge command to
bring the new figure into your old drawing. My new drawing looks like this:
...with all objects in the Teensy Audio Board at a deeper depth than
the red circle.
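I won't vouch for the exact flag spellings here (check figdepth's usage output for the real ones), but an invocation has this shape:

```shell
# Hypothetical option name and filenames -- see figdepth's help text
tclsh figdepth.tcl -mindepth 55 circle.fig > circle_deep.fig
```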
How does it work?
The Fig file format is
really simple, which is one reason why I like Xfig so much. You can
figure out what kind of object each line describes by looking at the
first character. You then figure out which number in the line is the
object's depth and increment it. The switch statement below shows how
one line is processed.
proc increase_object_depth { xfig_line depth_increase } {
    # Return a new xfig line with the depth increased
    #
    # Arguments:
    #   xfig_line -- Single line from an xfig file
    #   depth_increase -- Amount to increase the depth
    set entry_list [split $xfig_line]
    set first_character [lindex $entry_list 0]
    switch $first_character {
        "1" -
        "2" -
        "3" -
        "5" {
            # Ellipses, polylines (also imported picture bounding
            # boxes), splines, and arcs keep their depth in field 6
            set depth_index 6
        }
        "4" {
            # Text objects keep their depth in field 3
            set depth_index 3
        }
        default {
            # Color objects ("0"), compounds ("6"), and non-objects
            # carry no depth -- nothing to do
            return $xfig_line
        }
    }
    set old_depth [get_object_depth $xfig_line]
    set new_depth [expr {$old_depth + $depth_increase}]
    lset entry_list $depth_index $new_depth
    return [join $entry_list]
}
Having trouble with cmdline?
Figdepth
uses the
cmdline
package from
tcllib
to handle command line arguments. I usually run scripts out of Eshell in Emacs, so I have this in my .emacs:
...to set the TCLLIBPATH environment variable. This tells Tcl where to find Tcllib's packages.
I've been interested in
fieldbusses lately, and
MODBUS in particular is used
by some pretty cheap hardware modules. It also offers "driverless"
communication — you don't need a .dll or .so file to talk to your
hardware. The hardware I've been experimenting with is shown below:
...and can be bought from eBay for around 50 USD. I found some very
useful code on
the Tcl wiki
and was able to write and read from registers with a USB/RS-485
adaptor. The RTU-307C user's manual
implies that you can change the RS-485 baud from 9600, but I wasn't
able to figure out how that works.
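The "driverless" part means you build the RTU frames yourself: slave address, function code, register fields, and a CRC-16 over the whole thing. Here's a sketch of the CRC-16/MODBUS calculation every RTU frame needs (the request bytes below are a standard read-holding-registers example, not something specific to the RTU-307C):

```tcl
proc crc16_modbus {bytes} {
    # CRC-16/MODBUS: init 0xFFFF, reflected polynomial 0xA001
    set crc 0xFFFF
    foreach b $bytes {
        set crc [expr {$crc ^ $b}]
        for {set i 0} {$i < 8} {incr i} {
            set crc [expr {($crc & 1) ? (($crc >> 1) ^ 0xA001) : ($crc >> 1)}]
        }
    }
    return $crc
}

# Read one holding register at address 0 from slave 1:
# address 0x01, function 0x03, start 0x0000, count 0x0001
set frame {0x01 0x03 0x00 0x00 0x00 0x01}
set crc [crc16_modbus $frame]
# MODBUS RTU sends the CRC low byte first
puts [format "%02X %02X" [expr {$crc & 0xFF}] [expr {$crc >> 8}]]
```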
After some more help from
The Tcler's Wiki, I put
together a GUI to demonstrate MODBUS communication with the RTU-307C
module. The animation below shows the GUI next to an oscilloscope
(yelloscope) display.
The wiki helped me get the mouse wheel to move those sliders.
I have a few devices from Syscomp Electronic
Design that I use for experimenting
and debugging. The CGR-201, shown below, is one of these.
Syscomp sadly lost one of its founders, Peter Hiscocks, and won't be
releasing any new products. I forked their CircuitGear software (screenshot below) to
make some of the changes I wanted, and to start packaging the software
as single-file executables for Linux. My fork is here:
in the src directory. But you'll need Tcl, Tk, and some helper packages.
The single-file executable release contains all of these.
Once you download the executable, you'll have to give yourself permission to run it with something like
chmod a+x yelloscope-1.0.0-linux-x86_64
and your user will have to be in the right group for the software to access the hardware.
For me, on Ubuntu, this group is dialout. I added myself to the group with
sudo adduser john dialout
...and then I had to reboot.
A note on making Starpacks
Starkits and Starpacks are
ways of packaging Tcl programs to reduce or eliminate dependencies.
The makefile in the Yelloscope repository shows how I automate
creating Linux starpacks from Tcl sources. It's customized for my
environment though, and you'll have to change some paths to make it
work. I'll be happy to help if you reach out.
My AVR-based Arduino workflow is automated by GNU makefiles, which
call avrdude to interact with the
hardware. It's nice to also have the makefile find the Virtual COM
Port (VCP) assigned to the hardware programmer or USB/UART device.
Findcoms is a Tcl script
that finds the device and writes the VCP name to a file. Your
makefile can then read this file and pass the name to avrdude.
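Assuming findcoms writes the port name to a file called vcp.txt (the filename, -d value, and avrdude options here are all hypothetical), the makefile side might look like:

```make
# Hypothetical filenames and device type -- adjust for your setup
vcp.txt:
	tclsh findcoms.tcl -d uart

flash: main.hex vcp.txt
	avrdude -p atmega328p -c arduino -P $(shell cat vcp.txt) -U flash:w:main.hex
```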
From the command line
Findcoms uses a -d parameter to choose which device to detect. Use
-d? to list the available -d values.
I've been trying to figure out a good way to plot some sensor data as it comes in, and I needed a way to simulate that data. I ultimately wanted to get timestamped data into gnuplot, and I knew gnuplot would take time data formatted as Unix timestamps. I thought maybe I could use Unix timestamps with millisecond precision and avoid all the day/month/year formatting. I found out it wasn't going to be that easy.
Gnuplot can accept floating point seconds, but only with the right formatting
Gnuplot won't accept Unix Timestamps with millisecond precision -- the number has to come in as an integer. But it will accept times formatted as %H:%M:%S with floating point seconds. So then I needed to create these millisecond precision stamps. Tcl will give you the milliseconds since 1970, but its clock format command will neither consume this number directly, nor will it consume this number in floating point seconds. The fix, as pointed out by Donal Fellows, is to bolt the seconds and milliseconds together with a format string.
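Here's the shape of that fix as I understand it (a sketch; fakestream's actual code may differ): format the whole seconds with clock format, then bolt the millisecond remainder on with a format string.

```tcl
# Build an %H:%M:%S timestamp with millisecond precision by formatting
# the whole seconds and the millisecond remainder separately.
proc msstamp {} {
    set ms [clock milliseconds]
    return [format "%s.%03d" \
                [clock format [expr {$ms / 1000}] -format "%H:%M:%S"] \
                [expr {$ms % 1000}]]
}

puts [msstamp]  ;# something like 14:02:07.123
```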
Fakestream creates these stamps to simulate real-time data
I wrote fakestream to continuously write data stamped with these millisecond-precision stamps to a file. The plot below shows gnuplot periodically scanning this file and updating a plot.
The fakestream repository has the Tcl source for this script as well as single file executables for easy distribution. These Starkits and Starpacks are what keep me coming back to Tcl.
I do electronic design for a living, and the CAD package I use
determines the operating system I spend most of my time in. This
means Windows during the day, and Linux at home. Using
Emacs on both platforms gives
me a consistent look and feel. And
Eshell
is a very nice command shell for Windows.
I also need Tcl on both platforms, and
Emacs gives me a nice way to set the
TCLLIBPATH environment
variable. This tells Tcl where to look for packages you bring in with
package require. The TCLLIBPATH variable is a list of paths, and
Tcl needs to be able to make a list variable out of whatever it finds
in TCLLIBPATH. A space-separated list works very nicely. I have this in my .emacs:
;; Get extra emacs packages -- including some useful
;; string-manipulating functions like string-join
(require 'subr-x)
;; Add my local package directory to auto_path. This will be a Tcl
;; list, so the entries should be separated by spaces.
;;
;; Get tklib from https://github.com/tcltk/tklib
(setenv "TCLLIBPATH"
(string-join '("c:/Tcl/lib/local"
"c:/Tcl/lib/local/tklib/modules")
" "))
Why do I need to (require 'subr-x)? The string-join command is
supposed to be part of Emacs 25, but my Emacs 25.2.2 still chokes on
it at startup without the subr-x package. After evaluating .emacs with M-x eval-buffer:
...you can see that the TCLLIBPATH variable is set, and that Tcl uses it in its auto_path list. I originally installed Tcl using ActiveState's installer, which makes its own changes to auto_path.
I have the very interesting Gub package saved here:
and I can now require it with
...to show that auto_path allowed Tcl to find the local package.