Category Archives: logtalk

Lambda expressions in Logtalk

Logtalk 2.38.0, released earlier this month, adds support for lambda expressions. A simple example of a lambda expression is:

| ?- meta::map([X,Y]>>(Y is 2*X), [1,2,3], Ys).
Ys = [2,4,6]

In this example, a lambda expression, [X,Y]>>(Y is 2*X), is used as an argument to the map/3 list mapping predicate, defined in the library object meta, in order to double the elements of a list of integers. Using a lambda expression avoids writing an auxiliary predicate for the sole purpose of doubling the list elements. The lambda parameters are represented by the list [X,Y], which is connected to the lambda goal, (Y is 2*X), by the (>>)/2 operator.
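For comparison, without the lambda expression the same mapping requires a (hypothetical) auxiliary predicate, defined in the object making the call, whose only purpose is to name the doubling operation:

```logtalk
% hypothetical auxiliary predicate, written only to double one element
double(X, Y) :-
    Y is 2*X.
```

The query then becomes meta::map(double, [1,2,3], Ys), with the double/2 predicate adding nothing but boilerplate.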

Currying is supported, i.e. it is possible to write a lambda expression whose goal is another lambda expression. The above example can be rewritten as:

| ?- meta::map([X]>>([Y]>>(Y is 2*X)), [1,2,3], Ys).
Ys = [2,4,6]

Lambda expressions may also contain lambda free variables, i.e. variables that are global to the lambda expression. For example, using GNU Prolog as the back-end compiler, we can write:

| ?- meta::map({Z}/[X,Y]>>(Z#=X+Y), [1,2,3], Zs).
Z = _#22(3..268435455)
Zs = [_#3(2..268435454),_#66(1..268435453),_#110(0..268435452)]

Logtalk uses the ISO Prolog construct {}/1 for representing the lambda free variables, as this curly-brackets notation is often associated with sets. Note that the order of the free variables is of no consequence (on the other hand, a list is used for the lambda parameters as their order does matter).
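As a simpler, arithmetic-only illustration (the query is ours, not from the original examples), the {}/1 wrapper ensures that a binding for the free variable is shared with every application of the lambda:

```logtalk
| ?- Z = 10, meta::map({Z}/[X,Y]>>(Y is X+Z), [1,2,3], Ys).
Z = 10
Ys = [11,12,13]
```

Without the {Z}/1 wrapper, Z would be local to each application of the lambda expression.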

Both lambda free variables and lambda parameters can be any Prolog term. Consider the following example by Markus Triska:

| ?- meta::map([A-B,B-A]>>true, [1-a,2-b,3-c], Zs).
Zs = [a-1,b-2,c-3]

Lambda expressions can be used, as expected, in non-deterministic queries as in the following example using SWI-Prolog as the back-end compiler and Markus Triska’s CLP(FD) library:

| ?- meta::map({Z}/[X,Y]>>(clpfd:(Z#=X+Y)), Xs, Ys).
Xs = [],
Ys = [] ;
Xs = [_G1369],
Ys = [_G1378],
_G1369+_G1378#=Z ;
Xs = [_G1579, _G1582],
Ys = [_G1591, _G1594],
_G1579+_G1591#=Z ;
Xs = [_G1789, _G1792, _G1795],
Ys = [_G1804, _G1807, _G1810],
_G1789+_G1804#=Z ;

As illustrated by the above examples, Logtalk lambda expression syntax reuses the ISO Prolog construct {}/1 and the standard operators (/)/2 and (>>)/2, thus avoiding defining new operators, which is always tricky for a portable system such as Logtalk. The operator (>>)/2 was chosen as it suggests an arrow, similar to the syntax used in other languages such as OCaml and Haskell to connect lambda parameters with lambda functions. This syntax was also chosen in order to simplify parsing, error checking, and, eventually, compilation of lambda expressions. The specification of the Logtalk lambda expression syntax can be found in the Logtalk reference manual. The current Logtalk version also includes an example, lambdas, of using lambda expressions with a fair number of sample queries.

Although the first, experimental implementation of lambda expressions in Logtalk followed Ulrich Neumerkel’s proposal for lambda expression syntax in Prolog, that representation was dropped and replaced by the current one in order to avoid some of the questions raised by Ulrich’s proposed syntax. In his proposal, lambda expressions start with the (\)/1 prefix operator, which can be seen as an approximation to the Greek letter lambda (λ). However, the backslash character is usually associated in Prolog with negation, as found e.g. in the standard operators (\+)/1, (\==)/2, (\=)/2, and (=\=)/2. Another issue with this operator is that users must be careful when the first lambda parameter is enclosed in parentheses. Consider the following example by Markus (using SWI-Prolog with Ulrich’s lambda library):

| ?- maplist(\(A-B)^(B-A)^true, [1-a,2-b,3-c], Zs).

The goal fails because there is a missing space between the (\)/1 prefix operator and the opening parenthesis that follows. A likely trap for beginners. Ulrich’s syntax for lambda free variables requires adding a new infix operator, (+\)/2, to the base language, something that I prefer to avoid. Not to mention that this operator is too similar to the Prolog negation operator, (\+)/1. Parsing lambda parameters also needs to be careful to avoid calling a non-existing (^)/2 predicate when the lambda expression is malformed. Parsing lambda parameters is arguably simpler in Logtalk due to the use of a list plus the (>>)/2 operator to connect the parameters with the lambda goal.

The Logtalk implementation of lambda expressions is still evolving. The current development version features improved error-checking and adds support for using a (>>)/2 lambda expression as a goal (besides as a meta-predicate closure). No optimizations are in place yet. Thus, be aware of possible performance issues if you plan to use lambda expressions heavily in your applications. But don’t let that stop you from having fun playing with lambda expressions in Logtalk. As always, your feedback is appreciated. Thanks to Ulrich Neumerkel, Richard O’Keefe, and Markus Triska for their lambda expression examples.

Working with data sets

A recurring question on the comp.lang.prolog newsgroup is how to work with different data sets, usually loading them from different files, without mixing the data in the plain Prolog database. Unfortunately, these questions often lack enough details for making an informed choice between several potential programming solutions. Two possible solutions are (1) load the data into suitable data structures instead of using the database and (2) use clauses to represent the data but encapsulate each data set in its own Prolog module or Logtalk object. Some combination of both solutions may also be possible. In this post, however, we’re going to sketch the second solution using Logtalk objects. For an alternative but also Logtalk-based solution please see this previous post.

Assuming all data sets are described using the same predicates, the first step is to declare these predicates. The predicate declarations can be encapsulated either in an object or in a protocol (interface). Using a protocol we could write:

:- protocol(data_set).
    :- public(datum_1/3).  % data set description predicates
    :- public(datum_2/5).
:- end_protocol.

We can now represent each data set using its own object (possibly stored in its own file). Each data set object implements the data_set protocol defined above. For example:

:- object(data_set_1,
    implements(data_set)).
    datum_1(a, b, c).
    datum_2(1, 2, 3, 4, 5).
:- end_object.

Assuming we have the required memory, we can load some or all of our data sets without mixing their data. But that’s not all. We can also encapsulate our data set processing code in its own object (or set of objects, or hierarchy of objects, or whatever is suitable to the complexity of our application). This object, let’s name it processor, will perform its magic by sending messages to the specific data set that we want to process. For example:

:- object(processor).
    :- public(compute/2).
    compute(DataSet, Computation) :-
        DataSet::datum_1(A, B, C),
        % ... remaining goals computing Computation from the data
        Computation = f(A, B, C).   % placeholder computation
:- end_object.

If the computations we wish to perform make sense as questions sent to the data sets themselves, an alternative is to move the data set predicate declarations from the data_set protocol to the processor object and make the data set objects extend the resulting object, below renamed as data_set. For example:

:- object(data_set).
    :- public(datum_1/3).  % data set description predicates
    :- public(datum_2/5).
    :- public(datum_3/2).
    :- public(compute/1).  % computing predicates
    datum_3(abc, def).     % default value for datum_3/2
    compute(Computation) :-
        ::datum_1(A, B, C),
        % ... remaining goals computing Computation from the data
        Computation = f(A, B, C).   % placeholder computation
:- end_object.
:- object(data_set_1,
    extends(data_set)).
    datum_1(a, b, c).
    datum_2(1, 2, 3, 4, 5).
:- end_object.

An advantage of this solution is that the object data_set can contain default values for the data set description predicates. The ::/1 operator used above is the Logtalk operator for sending a message to self, i.e. to the data set object that received the message compute/1. If the information requested is not found in the data set object, then it will be looked for in its ancestor, where the default values are defined.

The best and most elegant solution will, of course, depend on the details of the data set processing application. For example, above we could have defined the object data_set as a class and the individual data sets as instances of this class (technically, the solution above uses prototypes).

Note that all the code above is static. Individual data set description predicates may be declared dynamic (using the predicate directive dynamic/1) if we need to update them while processing the data sets. If our application requires being able to delete data sets from memory, it is simply a matter of declaring the data set objects dynamic, using the Logtalk object directive dynamic/0, and of calling the Logtalk built-in predicate abolish_object/1 when a data set object is no longer needed.
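As a sketch (reusing the data_set_1 object from the first example), a dynamic data set object and its later removal could look like:

```logtalk
:- object(data_set_1,
    implements(data_set)).

    % object directive making the whole object dynamic,
    % so that it can later be abolished
    :- dynamic.

    datum_1(a, b, c).
    datum_2(1, 2, 3, 4, 5).

:- end_object.
```

Once the data set is no longer needed, the query abolish_object(data_set_1) removes it, and all its data, from memory.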

We have only scratched the surface of the Logtalk features that we could make use of in our implementation but, hopefully, it’s enough as a starting guide. Feel free to stop by the Logtalk discussion forums to further discuss this programming pattern.

Switching between Logtalk installed versions

Recent Logtalk releases include a shell script, logtalk_select, which allows easy switching between installed Logtalk versions. It’s an experimental script, loosely based on the python_select script, with two major limitations: it doesn’t update the Logtalk user folder and it’s POSIX-only. Nevertheless, it’s useful whenever you want to test your application with a new Logtalk release. Usage is quite simple. In order to list all installed versions type:

$ logtalk_select -l
Available versions: lgt2372 lgt2373 lgt2374 lgt2375

The current installed version can be checked by typing:

pmmbp:~ pmoura$ logtalk_select -s
Current version: lgt2375

In order to switch to another installed version type:

$ sudo logtalk_select lgt2374

Using sudo may or may not be needed depending on your Logtalk installation prefix and on the administrative privileges of your user account. Typing the script name without arguments prints a usage message:

$ logtalk_select
This script allows switching between installed Logtalk versions
logtalk_select [-vlsh] version
Optional arguments:
-v print version of logtalk_select
-l list available versions
-s show the currently selected version
-h help

If you’re a shell scripting wizard and able to improve the logtalk_select script, please mail me. As always, feedback and contributions are most welcome.

ICLP 2009 invited talk on Logtalk

The slides I used on my ICLP 2009 invited talk on Logtalk are available at:

My thanks to Patricia Hill and David S. Warren for the invitation. A special thanks to all of you who attended the invited talk. Hope you enjoyed it. I had a great time at Pasadena, meeting old friends and making new ones. The only puzzling thing was people complaining about the temperature in the conference rooms, which I found quite comfortable. But that was just me ;-) There are some interesting news from the ISO Prolog standardization meeting but that’s a topic for another post.

Logtalk 2.37.2 released

I released Logtalk 2.37.2 a couple of days ago. This is the third incremental release of the 2.37.x version. A total of 37 major releases, 122 when including both major and minor releases, since I started the Logtalk project in January of 1998 (the number 2 means this is the second generation of the Logtalk language; generation 3 is coming in a not so distant future). Release 2.37.3 is already being developed. This is your typical example of the open source mantra “release early, release often”.

The wide Prolog compatibility goals of Logtalk mean that there is really no end in sight for Logtalk development. Sometimes this scares me. At other times, it pushes me out of bed. It also means that a healthy number of Prolog compilers are alive and kicking, continuously being improved. Many of the changes in this and past releases are Prolog compatibility updates. Lack of strong standardization only complicates matters.

This release takes Logtalk support for compiling Prolog modules to a new level. This support is important for a number of reasons. First, it shows that Logtalk is not only a superset of plain Prolog but also of Prolog plus modules. Within reasonable terms, of course, given that Prolog module dialects often remind me of the Babel tower tale. Second, compiling a module as a Logtalk object helps to identify potential problems when porting Prolog code to Logtalk. Testing of this feature included compiling, or trying to compile, several Prolog module libraries and non-trivial Prolog module applications such as TopLog and ClioPatria. Results are good despite some unfortunate problems in the original Prolog module code. Third, all the trouble in implementing this feature helps to improve the Logtalk code base, making it more robust and allowing it to cope with a diversity of programming practices. Fourth, it helps me to better understand the subtleties of specific Prolog module systems, something that often is not easy to learn just by sitting down and reading user manuals. One always learns good bits to adapt and pitfalls to avoid when studying other people’s code.

Hope you enjoy this new release.

P.S. It’s great to finally get some free time to post some thoughts on Logtalk development after a very tiresome teaching semester. I keep dreaming about doing Logtalk development full time. Maybe one of these days…

Meta-predicate semantics

Meta-predicates allow the reuse of common programming patterns. Encapsulating meta-predicates in Prolog modules or Logtalk objects allows client modules and objects to reuse predicates customized by calls to local predicates. Nevertheless, meta-predicate semantics differ between Prolog and Logtalk. Logtalk meta-predicate semantics are quite simple:

Meta-arguments are always called in the meta-predicate calling context. The calling context is the object making the call to the meta-predicate.

Prolog semantics are similar but require the programmer to be aware of the differences between implicit and explicit module qualification. Consider the following meta-predicate library:

:- module(library, [my_call/1]).
:- meta_predicate(my_call(:)).
my_call(Goal) :-
    write('Calling: '), writeq(Goal), nl, call(Goal).
me(library).   % local predicate used by the test queries

A simple client could be:

:- module(client, [test/1]).
:- use_module(library, [my_call/1]).
test(Me) :-
    my_call(me(Me)).
me(client).

A simple test query:

?- client:test(Me).
Calling: client:me(_G230)
Me = client.

This is the expected result, so everything seems nice and clear. But consider the following seemingly innocuous changes to the client module:

:- module(client, [test/1]).
test(Me) :-
    library:my_call(me(Me)).
me(client).

In this second version we use explicit qualification in order to call the my_call/1 meta-predicate. Repeating our test query gives:

?- client:test(Me).
Calling: library:me(_G230)
Me = library.

In order to understand this result, we need to be aware that the :/2 operator both calls a predicate in another module and changes the calling context of the predicate to that module. The first use is expected. The second use is not obvious, is counterintuitive, and often not properly documented. We can, however, conclude that the meta-predicate definition is still working as expected as the calling context is set to the library module. If we still want the me/1 predicate to be called in the context of the client module instead, we need to explicitly qualify the meta-argument by writing:

test(Me) :-
    library:my_call(client:me(Me)).

This is an ugly solution but it will work as expected. Note that the idea of the meta_predicate/1 directive is to avoid the need for explicit qualifications in the first place. But that requires using use_module/1-2 directives for importing the meta-predicates and implicit qualification when calling them.

Explicit qualification is not an issue in Logtalk, nor does it change the calling context of a meta-predicate. Explicit qualification of a meta-predicate call sets where to start looking for the meta-predicate definition, not where to look for the meta-arguments definition.
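As a sketch of the Logtalk counterpart of the module example above (the object and predicate names are ours), note that the explicit library:: qualification does not change the calling context of the meta-argument:

```logtalk
:- object(library).

    :- public(my_call/1).
    % the argument of my_call/1 is a meta-argument (a goal)
    :- meta_predicate(my_call(0)).
    my_call(Goal) :-
        % the meta-argument is called in the context of the
        % object sending the my_call/1 message
        call(Goal).

    me(library).

:- end_object.


:- object(client).

    :- public(test/1).
    test(Me) :-
        library::my_call(me(Me)).

    me(client).

:- end_object.
```

Here the query client::test(Me) binds Me to client, not to library, as the meta-argument me(Me) is called in the context of the client object.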

I suspect that the contrived semantics of the :/2 operator are rooted in optimization goals. When a use_module/1 directive is used, most (if not all) Prolog compilers require the definition of the imported module to be available (thus resolving the call at compilation time). However, that doesn’t seem to be required when compiling an explicitly qualified module call. For example, using SWI-Prolog 5.7.10 or YAP 6.0, the following code compiles without errors or warnings (despite the fact that the module xpto doesn’t exist):

:- module(foo, [bar/0]).
bar :-
    xpto:blabla.

Thus, in this case the xpto:blabla call is resolved at runtime. In our example above with the explicit call to the my_call/1 meta-predicate, the implementation of the :/2 operator propagates the module prefix to the meta-arguments. Doing otherwise would imply knowing at runtime the original module containing the call, information that most Prolog compilers don’t keep.

In the case of Logtalk, the execution context of a predicate call always includes the calling context, allowing simpler meta-predicate semantics (and much more). Moreover, the Logtalk compiler doesn’t need to know that we’re calling a meta-predicate when compiling source code. This allows client code to be compiled independently of library code. Meta-predicate information is either used at compile time when static binding is possible or at runtime. In the second case, the caching mechanism associated with dynamic binding ensures that the necessary computations to know which arguments are meta-arguments are only performed once. There is, of course, a small performance penalty in carrying predicate execution context. I argue that’s a small price to pay in order to simplify meta-predicate semantics.

Prolog compilers are too permissive

Prolog compilers are too permissive, too forgiving… resulting in sloppy programming. Experienced programmers may shake off guilty feelings by convincing themselves that it’s only run-once code that nobody is going to reuse, or port, or maintain. Or that it’s really not their fault. After all, if the Prolog compiler doesn’t complain and the application appears to run, why care? Novice programmers never notice until they decide to switch Prolog compilers. Or until someone comes along and asks “what if” while trying to help them debug or port their applications.

Consider a simple example, the arg/3 built-in predicate. Some popular Prolog compilers have long interpreted a negative term argument position as a failure rather than a programming error. When I complained (I used to do that a lot to Prolog implementers) the reply was something along the lines of “we agree but our users would be upset if we break compatibility with their applications”. You can easily find similar examples. Just dig into your memories. E.g. do you remember how, when “logical update semantics” came along, all those applications relying on “immediate update semantics” suddenly broke?

Recently I ported, or tried to port, well known Prolog libraries and interesting Prolog applications to Logtalk, both to allow running these libraries and applications in most Prolog compilers and for testing the Logtalk automatic compilation of Prolog modules as Logtalk objects (oh the pain I chose to inflict upon myself!). Common sins are missing discontiguous/1 and multifile/1 directives, non-declared dependencies, i.e. missing use_module/1 or use_module/2 directives, arbitrary goals used as directives (instead of disciplined use of the initialization/1 directive), duplicated predicate exports (especially when re-exporting predicates from imported modules), operator overdoses, term expansion clauses defined in the same file that is going to be term expanded (suddenly predicate definition order in a source file is important!), not to mention code relying on assumptions that are specific to a Prolog compiler or an operating-system. Prolog compilers should warn the users about these (and other) issues.

Another problem is hidden dependencies on Prolog “modes”. Some Prolog compilers support an “iso” mode (which usually is not the default) and some other modes for backwards compatibility. Guess what happens when users try to reuse libraries developed for the same Prolog compiler but using different modes. Or when users start up the Prolog compiler in the wrong mood… err, mode. If they are lucky, they will get a stream of compilation errors.

This is a vicious circle. Prolog implementers are afraid to make their compilers more strict because they don’t want to break existing code or upset users. Users enjoy a permissive and flexible programming environment, even if it means risking shooting themselves in the foot, unconsciously writing non-portable code, and being unable to list their own code dependencies without a cross-reference tool; they have little incentive and see no rewards in writing more portable and robust code.

Portability is not the only victim here. It is hard to do static code analysis (you want your applications to run faster and use less memory, don’t you?) in this mix of explicit and implicit programming assumptions. ISO standardization is another victim, as Prolog implementers and library developers get defensive, arguing that the required changes are only of interest to ISO nitpickers, failing to see the forest for their own tree.

This post is a bit harsh, I know. But wake-up calls sometimes need harsh voices to complement more carrot-like incentives and initiatives. Please don’t shoot the messenger.

Using Logtalk to run Prolog module code in compilers without a module system

Logtalk can compile most Prolog modules as objects. This is accomplished by recognizing and parsing a common subset of Prolog module directives. For simple module code, it suffices to change the file name extensions from .pl to .lgt and compile the files as usual using the logtalk_load/1-2 built-in predicates. For modules that use proprietary predicates, directives, and syntax, some changes to the original code may be necessary.

As an example, assume that we want to use the SWI-Prolog modules lists, pairs, oset, and ordsets in GNU Prolog, a compiler that doesn’t support a module system. After making a copy of the original files and renaming their extensions to .lgt, we will need to remove a few specific, proprietary SWI-Prolog bits. First, we need to comment out all occurrences of the directive:

:- set_prolog_flag(generate_debug_info, false).

Second, we need to comment out in lists.lgt the calls to the must_be/2 predicate and the line:

:- use_module(error, [must_be/2]).

Still in lists.lgt, we will need to replace the call to the succ/2 predicate with:

M is N + 1,

Third, Logtalk doesn’t support the use_module/1 directive, requiring instead the use of the use_module/2 directive. Thus, we need to replace the directive:

:- use_module(library(oset)).

with the directive:

:- use_module(oset,
        [oset_int/3, oset_addel/3, oset_delel/3,
         oset_diff/3, oset_union/3]).

Finally, is_list/1 is a built-in predicate in SWI-Prolog but not in GNU Prolog. Logtalk provides its own portable versions of the is_list/1, succ/2, and must_be/2 predicates but let’s leave them out for now, for the sake of simplicity. After saving our changes, we are ready for a quick experiment:

$ gplgt
Logtalk 2.36.0
Copyright (c) 1998-2009 Paulo Moura
GNU Prolog 1.3.1
By Daniel Diaz
Copyright (C) 1999-2009 Daniel Diaz
| ?- {lists, pairs, oset, ordsets}.
| ?- pairs::map_list_to_pairs(lists::length,[[1,2],[a,b,c],[X]],Pairs).
Pairs = [2-[1,2],3-[a,b,c],1-[X]]

In SWI-Prolog the equivalent call would be:

$ swipl
Welcome to SWI-Prolog (Multi-threaded, 32 bits, Version 5.7.8)
?- pairs:map_list_to_pairs(length,[[1,2],[a,b,c],[X]],Pairs).
Pairs = [2-[1, 2], 3-[a, b, c], 1-[X]].

So, all done? Not quite. Different Prolog compilers provide different sets of built-in predicates. Module libraries use those built-in predicates. Thus, porting module code must also check for used built-in predicates. Fortunately, that’s quite easy in Logtalk: we simply compile the files with the portability compiler flag set to warning. In the case of our example, there is no problem for GNU Prolog but if we try to load our files in e.g. B-Prolog (another compiler without a module system) we will find that the lists module calls a memberchk/2 predicate that is not built-in in this compiler.
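For example, the portability check for the four files ported above can be sketched as:

```logtalk
| ?- logtalk_load([lists, pairs, oset, ordsets], [portability(warning)]).
```

Any call to a predicate or use of a feature outside the ISO Prolog standard is then reported as a compilation warning.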

The porting process is not as straightforward as we might have hoped, but nothing is really straightforward when dealing with Prolog portability issues. The code changes needed are usually simple to apply, as the example above illustrates. A small price to pay when porting module code. I’m sure you appreciate the irony of using Logtalk to run Prolog module code in module-less Prolog compilers. The subset of module directives recognized by Logtalk is enough for dealing with most Prolog module libraries. This subset could also provide a basis for a Prolog module standard based on current practice but that’s a topic for another post.

P.S. The predicate succ/2 is defined in the Logtalk library object integer. An equivalent predicate to is_list/1 is defined in the Logtalk library object list, which also defines a memberchk/2 predicate. Moreover, each Logtalk library object that defines a type contains a definition for a check/1 predicate that plays the same role as the SWI-Prolog library predicate must_be/2.
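Thus, instead of patching out those calls, the port could have used these library objects directly. For example:

```logtalk
| ?- integer::succ(1, S).
S = 2
```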

Logtalk 2.36.0 released

I released Logtalk 2.36.0 yesterday. The biggest news is support for settings files. At startup, Logtalk looks for and loads a settings.lgt file found in the startup directory. If not found, Logtalk looks for and loads a settings.lgt file in the Logtalk user folder. These settings files allow users to customize Logtalk without forcing them to edit the back-end Prolog configuration files, which often change between releases. This allows users to easily update to a new Logtalk version without the tedious work of reapplying changes to the default values of compiler flags in the new versions of the configuration files. In future releases, the updated scripts and installers will automatically take care of preserving any existing settings file found when updating the Logtalk user folder.

Although the new settings files feature is easy to describe, a lot of work went into its implementation and testing, thanks to the lack of Prolog standards for operating-system access. This includes basic functionality such as finding out the startup directory, opening a file in the current directory, and accessing operating-system environment variables. Something that should be implemented and tested during an afternoon took me one week of work. One week that would have been better spent working on e.g. the Logtalk libraries. In the end, I got settings files working for most back-end Prolog compilers on POSIX systems and, for some of them, also on Windows. Not ideal, but hard to do better given the back-end Prolog compiler limitations.

Lack of feature-parity between back-end Prolog compilers is always a problem. It turns what should be simple documentation into long lists of exceptions. Some users, completely oblivious to the nightmare that is writing portable Prolog code, end up blaming Logtalk for a limited feature set that pales when compared with recent programming languages. I’m quite tired of dealing with these portability problems. Therefore, future Logtalk releases will cut down on the number of compatible Prolog compilers. Hopefully, this will speed up future development and will enable more straightforward implementations of new functionality.

A PhD thesis you don’t want to miss

Jan Wielemaker, SWI-Prolog main developer, kindly sent me a paper copy of his PhD thesis. You may grab a PDF version at the following URL:

This is a PhD thesis you don’t want to miss. It describes Jan’s outstanding work developing SWI-Prolog as a top-notch Prolog system and applying logic programming to solve large-scale problems. Hats off to you Jan. Thanks for sharing your hard work with all of us.

P.S. I love the thesis cover.