Spring cleaning coming

The development of the current generation of Logtalk, 2.x, began in January 1998. At that time, the ISO Prolog Standard (Part 1: General Core) was only three years old. Thus, accepting and dealing with the lack of compliance of most Prolog compilers with the standard was a sensible choice.

Back to the present. The ISO Prolog Standard is now 15 years old. Compliance with the standard has greatly improved for some, but not all, Prolog compilers. Keeping even minimal Logtalk compatibility with some of the existing Prolog compilers is no longer feasible. In fact, keeping compatibility with the most problematic Prolog compilers (as far as standards compliance goes) acts as an anchor, slowing down Logtalk development and preventing improvements and the implementation of new features.

Some of the most problematic Prolog compilers (again, from the point of view of standards compliance) are, to the best of my knowledge, no longer being developed. Other Prolog compilers, actively maintained today, have decided to ignore the current official and de facto standards. A few, such as IF/Prolog, provided good standards compliance but have apparently been discontinued by their developers.

I plan to ditch Logtalk compatibility with the following Prolog compilers in the upcoming 2.39.0 release: ALS Prolog, Amzi! Prolog, BinProlog, GNU Prolog, IF/Prolog, JIProlog, K-Prolog, LPA MacProlog, LPA WinProlog, Open Prolog, MasterProlog, PrologII+, Quintus Prolog. This may seem like a long list but I suspect this decision will have no consequence for most (if not all) Logtalk users. If you think it is still worthwhile to support some compiler on this list, please contact me as soon as possible.

UPDATE: added GNU Prolog to the list of no longer supported compilers. Support for this compiler will be restored as soon as it implements the ISO Prolog standard directive multifile/1.


Lambda expressions in Logtalk

Logtalk 2.38.0, released earlier this month, adds support for lambda expressions. A simple example of a lambda expression is:

| ?- meta::map([X,Y]>>(Y is 2*X), [1,2,3], Ys).
Ys = [2,4,6]
yes

In this example, a lambda expression, [X,Y]>>(Y is 2*X), is used as an argument to the map/3 list mapping predicate, defined in the library object meta, in order to double the elements of a list of integers. Using a lambda expression avoids writing an auxiliary predicate for the sole purpose of doubling the list elements. The lambda parameters are represented by the list [X,Y], which is connected to the lambda goal, (Y is 2*X), by the (>>)/2 operator.

Currying is supported, i.e. it is possible to write a lambda expression whose goal is another lambda expression. The above example can be rewritten as:

| ?- meta::map([X]>>([Y]>>(Y is 2*X)), [1,2,3], Ys).
Ys = [2,4,6]
yes

Lambda expressions may also contain lambda free variables, i.e. variables that are global to the lambda expression. For example, using GNU Prolog as the back-end compiler, we can write:

| ?- meta::map({Z}/[X,Y]>>(Z#=X+Y), [1,2,3], Zs).
Z = _#22(3..268435455)
Zs = [_#3(2..268435454),_#66(1..268435453),_#110(0..268435452)]
yes

Logtalk uses the ISO Prolog construct {}/1 for representing the lambda free variables, as curly brackets are often associated with sets. Note that the order of the free variables is of no consequence (a list is used for the lambda parameters, on the other hand, precisely because their order does matter).
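
For reference, the main forms of Logtalk lambda expressions can be summarized as follows (just a quick sketch; the full grammar is given in the reference manual):

[Parameter1, Parameter2, ...]>>Goal                      % lambda parameters only
{Free1, Free2, ...}/[Parameter1, Parameter2, ...]>>Goal  % free variables plus parameters
{Free1, Free2, ...}/Goal                                 % free variables only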

Both lambda free variables and lambda parameters can be any Prolog term. Consider the following example by Markus Triska:

| ?- meta::map([A-B,B-A]>>true, [1-a,2-b,3-c], Zs).
Zs = [a-1,b-2,c-3]
yes

Lambda expressions can be used, as expected, in non-deterministic queries as in the following example using SWI-Prolog as the back-end compiler and Markus Triska’s CLP(FD) library:

| ?- meta::map({Z}/[X,Y]>>(clpfd:(Z#=X+Y)), Xs, Ys).
Xs = [],
Ys = [] ;
Xs = [_G1369],
Ys = [_G1378],
_G1369+_G1378#=Z ;
Xs = [_G1579, _G1582],
Ys = [_G1591, _G1594],
_G1582+_G1594#=Z,
_G1579+_G1591#=Z ;
Xs = [_G1789, _G1792, _G1795],
Ys = [_G1804, _G1807, _G1810],
_G1795+_G1810#=Z,
_G1792+_G1807#=Z,
_G1789+_G1804#=Z ;
...

As illustrated by the above examples, Logtalk lambda expression syntax reuses the ISO Prolog construct {}/1 and the standard operators (/)/2 and (>>)/2, thus avoiding defining new operators, which is always tricky for a portable system such as Logtalk. The operator (>>)/2 was chosen as it suggests an arrow, similar to the syntax used in other languages such as OCaml and Haskell to connect lambda parameters with lambda functions. This syntax was also chosen in order to simplify parsing, error checking, and, eventually, compilation of lambda expressions. The specification of the Logtalk lambda expression syntax can be found in the Logtalk reference manual. The current Logtalk version also includes an example, lambdas, of using lambda expressions with a fair number of sample queries.
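
To give a taste of that example, here is one more illustrative query in the same spirit, this time using the fold_left/4 predicate from the library object meta to sum a list with a lambda expression:

| ?- meta::fold_left([Acc0,X,Acc]>>(Acc is Acc0 + X), 0, [1,2,3], Sum).
Sum = 6
yes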

Although the first, experimental implementation of lambda expressions in Logtalk followed Ulrich Neumerkel’s proposal for lambda expression syntax in Prolog, that representation was dropped and replaced by the current one in order to avoid some of the questions raised by Ulrich’s proposed syntax. In his proposal, lambda expressions start with the (\)/1 prefix operator, which can be seen as an approximation to the Greek letter lambda (λ). However, the backslash character is usually associated in Prolog with negation, as found e.g. in the standard operators (\+)/1, (\==)/2, (\=)/2, and (=\=)/2. Another issue with this operator is that users must be careful when the first lambda parameter is enclosed in parentheses. Consider the following example by Markus (using SWI-Prolog with Ulrich’s lambda library):

| ?- maplist(\(A-B)^(B-A)^true, [1-a,2-b,3-c], Zs).
false.

The goal fails because there is a missing space between the (\)/1 prefix operator and the opening parenthesis that follows, a likely trap for beginners. Ulrich’s syntax for lambda free variables requires adding a new infix operator, (+\)/2, to the base language, something that I prefer to avoid. Not to mention that this operator is too similar to the Prolog negation operator, (\+)/1. Parsing of lambda parameters also needs to be careful to avoid calling a non-existing (^)/2 predicate when the lambda expression is malformed. Parsing lambda parameters is arguably simpler in Logtalk due to the use of a list plus the (>>)/2 operator to connect the parameters with the lambda goal.

The Logtalk implementation of lambda expressions is still evolving. The current development version features improved error checking and adds support for using a (>>)/2 lambda expression as a goal, besides its use as a meta-predicate closure (see the sketch at the end of this post). No optimizations are in place yet. Thus, be aware of possible performance issues if you plan to use lambda expressions heavily in your applications. But don’t let that stop you from having fun playing with lambda expressions in Logtalk. As always, your feedback is appreciated. Thanks to Ulrich Neumerkel, Richard O’Keefe, and Markus Triska for their lambda expression examples.
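
As a contrived sketch of the new goal usage mentioned above (assuming the behavior of the current development version), a lambda expression with an empty parameter list can be called directly where a goal is expected:

| ?- []>>(write(hello), nl).
hello
yes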


Working with data sets

A recurring question on the comp.lang.prolog newsgroup is how to work with different data sets, usually loading them from different files, without mixing the data in the plain Prolog database. Unfortunately, these questions often lack enough details for making an informed choice between several potential programming solutions. Two possible solutions are (1) load the data into suitable data structures instead of using the database and (2) use clauses to represent the data but encapsulate each data set in its own Prolog module or Logtalk object. Some combination of both solutions may also be possible. In this post, however, we’re going to sketch the second solution using Logtalk objects. For an alternative but also Logtalk-based solution please see this previous post.

Assuming all data sets are described using the same predicates, the first step is to declare these predicates. The predicate declarations can be encapsulated either in an object or in a protocol (interface). Using a protocol we could write:

:- protocol(data_set).
 
    :- public(datum_1/3).  % data set description predicates
    :- public(datum_2/5).
    ...
 
:- end_protocol.

We can now represent each data set using its own object (possibly stored in its own file). Each data set object implements the data_set protocol defined above. For example:

:- object(data_set_1,
    implements(data_set)).
 
    datum_1(a, b, c).
    ...
 
    datum_2(1, 2, 3, 4, 5).
    ...
 
:- end_object.

Assuming we have the required memory, we can load some or all of our data sets without mixing their data. But that’s not all. We can also encapsulate our data set processing code in its own object (or set of objects, or hierarchy of objects, or whatever is suitable to the complexity of our application). This object, let’s name it processor, will perform its magic by sending messages to the specific data set that we want to process. For example:

:- object(processor).
 
    :- public(compute/2).
    ...
 
    compute(DataSet, Computation) :-
        DataSet::datum_1(A, B, C),
        ...
 
:- end_object.

If the computations we wish to perform make sense as questions sent to the data sets themselves, an alternative is to move the data set predicate declarations from the data_set protocol to the processor object and make the data set objects extend the resulting object, below renamed as data_set. For example:

:- object(data_set).
 
    :- public(datum_1/3).  % data set description predicates
    :- public(datum_2/5).
    ...
    :- public(compute/1).  % computing predicates
    ...
 
    datum_3(abc, def).     % default value for datum_3/2
 
    compute(Computation) :-
        ::datum_1(A, B, C),
        ...
 
:- end_object.
 
:- object(data_set_1,
    extends(data_set)).
 
    datum_1(a, b, c).
    ...
 
    datum_2(1, 2, 3, 4, 5).
    ...
 
:- end_object.

An advantage of this solution is that the object data_set can contain default values for the data set description predicates. The ::/1 operator used above is the Logtalk operator for sending a message to self, i.e. to the data set object that received the message compute/1. If the information requested is not found in the data set object, then it will be looked for in its ancestor, where the default values are defined.
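
For example, assuming that datum_3/2 is declared among the data set description predicates above, we would expect the default value to be found when querying a data set object that doesn’t define the predicate itself:

| ?- data_set_1::datum_3(X, Y).
X = abc
Y = def
yes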

The best and most elegant solution will, of course, depend on the details of the data set processing application. For example, above we could have defined the object data_set as a class and the individual data sets as instances of this class, as sketched below (technically, the solution above uses prototypes).
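
A minimal sketch of that class-based alternative, making data_set its own metaclass by having it instantiate itself (a common Logtalk idiom):

:- object(data_set,               % class version of the prototype above
    instantiates(data_set)).      % data_set is its own metaclass
 
    :- public(datum_1/3).
    ...
 
:- end_object.
 
:- object(data_set_1,
    instantiates(data_set)).
 
    datum_1(a, b, c).
    ...
 
:- end_object.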

Note that all the code above is static. Individual data set description predicates may be declared dynamic (using the predicate directive dynamic/1) if we need to update them while processing the data sets. If our application requires being able to delete data sets from memory, it is simply a question of declaring the data set objects dynamic (using the Logtalk object directive dynamic/0) and of using the Logtalk built-in predicate abolish_object/1 when a data set object is no longer needed.
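
For example, a new data set object might be created and later discarded at runtime (a sketch using the Logtalk built-in predicates create_object/4 and abolish_object/1; data_set_9 is a hypothetical identifier):

| ?- create_object(data_set_9, [extends(data_set)], [], [datum_1(x, y, z)]).
yes
 
| ?- abolish_object(data_set_9).
yes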

We have only scratched the surface of the Logtalk features that we could make use of in our implementation but, hopefully, it’s enough as a starting guide. Feel free to stop by the Logtalk discussion forums to further discuss this programming pattern.


Mandatory versus optional ISO Prolog standards

Currently, there are two approved ISO Prolog standards: ISO/IEC 13211-1: General core (first edition published 1995-06-01; a corrigendum was published recently) and ISO/IEC 13211-2: Modules (first edition published 2000-06-01). There are also five standardization proposals being discussed: Core Revision, Definite Clause Grammars (DCGs), Globals, Threads, and Portable Operating-System Interface (POSI).

While I was a member of the WG17 standardization group (see my previous post), I always stood for a mandatory core standard, making all other standards optional when talking about ISO compliance of a specific Prolog compiler. This means that a Prolog implementer would only need to comply with the core standard in order to claim conformance to the ISO Prolog specification. The Prolog implementer could also freely choose to implement e.g. DCGs and POSI, disregarding Modules, Globals, and Threads.

This view of mandatory and optional Prolog standards was shared by some but not all members of the WG17 standardization group. Some of them wanted to make the Module standard mandatory and pushed for making some of the other standardization proposals dependent on (or at least make reference to) the approved Module standard. I find this a recipe for disaster and for the irrelevance of the ISO Prolog standards. A standard should stand on its own merits. Despite the hard work done on the Module standard, the proposed module system is (rightfully) ignored by most Prolog implementers. Other standard proposals should not be used as leverage to force implementers to implement a flawed standard.

Why is the current Module standard flawed?

First, it specifies a new module system instead of trying to standardize current practice. Instead of helping existing module implementations to converge, the standard chooses to specify a new, and therefore incompatible, module system. For example, the standard introduces a new concept of module interface (which can only be implemented by a single module!) that is not found elsewhere, even today.

Second, it specifies two different and incompatible ways of dealing with meta-predicates (the infamous colon_sets_calling_context flag). This means that two Prolog compilers can comply with this standard and still be incompatible!

Third, it specifies a meta-predicate directive that prevents the specification of the number of missing arguments when working with closures. One of the consequences is that only the Prolog implementer knows how to parse and make use of the meta-predicate directive for built-in predicates! But check what the inventor of the meta-predicate directive has to say about the flaws of the meta-predicate directive as specified in the Module standard.
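
To make the difference concrete, compare the two directives below for a hypothetical map/3 meta-predicate. In the integer notation, found e.g. in Logtalk and recent SWI-Prolog versions, the integer states how many additional arguments the closure will be extended with; the standard’s notation conveys no such information (a sketch):

% common practice: 2 means the closure is extended with two extra arguments
:- meta_predicate(map(2, ?, ?)).
 
% ISO Part 2 syntax: the colon only flags a meta-argument
:- metapredicate(map(:, ?, ?)).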

Fourth, it makes some poor choices regarding built-in predicates and built-in directives. For example, the specification of the predicate_property/2 predicate defines the properties public and private as stating whether clause/2 can be used on the predicate clauses. Thus, the properties public and private cannot be used to reason about predicate scope, which would be a much better match for most programmers’ expectations. Another example is the meta-predicate directive, which is given the ugly name metapredicate/1 (while existing module systems use, of course, a meta_predicate/1 directive!).

Fifth, the standard fails to specify a solution for renaming predicates when importing. The consequence is that library developers need to be aware of the predicate names used by other library developers in order to avoid conflicts. So much for the idea that modules provide an encapsulation mechanism. Of course, the standard also states that any module predicate (including non-exported ones) can be called using explicit qualification, and leaves as “an allowable extension to provide a mechanism that hides certain procedures (…)”.
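
As an aside, some Prolog systems do provide predicate renaming when importing as an extension, which is the kind of feature the standard could have specified. For example, recent SWI-Prolog versions accept:

:- use_module(library(lists), [append/3 as list_append/3]).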

Other problems with the current module standard could be described here but the ones stated above are hopefully enough to convince you that the standard needs to be thoroughly revised.

You may think that my criticism of the current module standard (and of WG17 policies) is mostly motivated by my work on Logtalk, which provides an alternative to the use of modules. You are wrong. True, Logtalk objects subsume module functionality and the Logtalk compiler is able to compile most modules as objects. But the Logtalk compiler also goes to great lengths to allow programmers to use both modules and objects in the same applications. Case in point: the fact that Logtalk can compile most modules as objects clearly shows that there is enough common core functionality in today’s module systems to warrant a new module standard focused on current practice. But any new or revised module standard should also pay due attention to the advanced module systems found in some Prolog systems such as ECLiPSe and Ciao.

In its current state, making the current module standard mandatory, or required for implementing other standard proposals, will either delay standardization efforts or tie implementations to a limited and flawed module system for years to come.


Switching between Logtalk installed versions

Recent Logtalk releases include a shell script, logtalk_select, which allows easy switching between installed Logtalk versions. It’s an experimental script, loosely based on the python_select script, with two major limitations: it doesn’t update the Logtalk user folder and it’s POSIX-only. Nevertheless, it’s useful whenever you want to test your application with a new Logtalk release. Usage is quite simple. In order to list all installed versions type:

$ logtalk_select -l
Available versions: lgt2372 lgt2373 lgt2374 lgt2375

The currently installed version can be checked by typing:

$ logtalk_select -s
Current version: lgt2375

In order to switch to another installed version type:

$ sudo logtalk_select lgt2374

Using sudo may or may not be needed, depending on your Logtalk installation prefix and on the administrative privileges of your user account. Typing the script name without arguments prints a help message:

$ logtalk_select
This script allows switching between installed Logtalk versions
 
Usage:
logtalk_select [-vlsh] version
 
Optional arguments:
-v print version of logtalk_select
-l list available versions
-s show the currently selected version
-h help

If you’re a shell scripting wizard and able to improve the logtalk_select script, please mail me. As always, feedback and contributions are most welcome.


Stepping down as editor of ISO Prolog standardization proposals

I’m stepping down as editor of ISO Prolog standardization proposals. In recent years, I found myself responsible for four different draft proposals: Core Revision, DCGs, Threads, and POSI (Portable Operating-System Interface). My fault, really. With the exception of the DCGs proposal, all the other proposals were born from my initiative. Recently I have been unable to fulfill my duties as editor of the DCGs proposal, failing to meet the deadline for its next revision. This resulted both from the proverbial lack of time and from my being weary of the ISO standardization process. This process is mostly broken and unable to meet the needs of the Prolog community. I tried to fix it from within. I failed.

The last straw that led to my decision to end my standardization work was the lame events at the WG17 meeting in Pasadena. Tired of the lack of sensible priorities in the discussion of the standardization proposals, I succeeded in changing the meeting’s agenda, convincing the other participants to discuss the Core Revision proposal instead of spending another annual meeting discussing DCGs and Globals. Nothing wrong, of course, with DCGs and Globals. Both are worthy subjects for standardization. But fixing and improving the core Prolog standard is the most important and urgent task. We discussed the Core Revision proposal in the morning, going from A to Z, making decisions and identifying contentious aspects that would merit further discussion in the next revision of the proposal. At the end of the morning, I was pretty satisfied with the results and left to catch my flight back home, thus being unable to attend the WG17 meeting in the afternoon. The remaining participants decided, without me, the editor of the Core Revision proposal, being present, to go back and change the decisions made in the morning. I found this behavior regrettable and disrespectful. I also found some of the afternoon decisions ill-informed and resulting from an apparent lack of knowledge of the current, published standards. Moreover, most of the participants weren’t even aware that the Core Revision proposal existed before this meeting and never participated in previous discussions about it.

I still believe that standardization is vital for the future of Prolog as a programming language. But the current ISO process is the wrong way to do it. Case in point: standardization proposals are voted on by countries, instead of being voted on by implementers and user groups. Implementers have always decided if a proposal is worthy, by implementing it, or worthless, by ignoring it (think ISO Prolog Part 2: Modules). Users are the ones using the language and clamoring for a better one.

A saddening aspect of the ISO standardization process is the lack of perception from outsiders that people working on proposals and participating in meetings are volunteers and not necessarily experts on the matters being discussed. I can understand that outsiders find some aspects of the proposals poorly formulated or completely wrong. I cannot understand that, instead of criticizing the proposals and suggesting alternatives, outsiders choose to insult the volunteers that are doing their best to improve the current standards.

Visibility and openness of the standardization work are also a problem. There are standardization discussion forums and a mailing list. Both are mostly ignored. Neither is listed on the official WG17 web site. It quickly gets tiresome to keep explaining and repeating arguments because people either want to remain anonymous when giving feedback on the current proposals or have no knowledge of the reasoning and previous discussions behind the proposals.

Improving the current, published Prolog standards requires the courage to recognize past errors and fix them, even if that results in revoking the standards and replacing them with hopefully better specifications. Some people refuse this path and desperately cling to the past, painting the whole standardization process into a corner. I have no patience left for this nonsense.

More could be said about the problems in the current ISO standardization process, but I hope that the few described above are enough for you to understand my decision. Many thanks to all the people who contributed to getting my standardization efforts this far. Hopefully others with more time and energy will continue from here. My only advice, if I may give one, is: throw away the current ISO standardization process and start a new, grassroots movement that brings together implementers and user groups. Some good examples can be found in the recent web standardization processes and in other programming language communities.


ICLP 2009 invited talk on Logtalk

The slides I used on my ICLP 2009 invited talk on Logtalk are available at:

http://logtalk.org/papers/iclp2009/logtalk_iclp2009.pdf

My thanks to Patricia Hill and David S. Warren for the invitation. A special thanks to all of you who attended the invited talk. I hope you enjoyed it. I had a great time at Pasadena, meeting old friends and making new ones. The only puzzling thing was people’s complaints about the temperature in the conference rooms, which I found quite comfortable. But maybe that was just me ;-) There are some interesting news from the ISO Prolog standardization meeting but that’s a topic for another post.


Loading is not importing!

Recently, I posted some ramblings about my experience compiling Prolog modules as Logtalk objects. One of my pet peeves with Prolog module systems and Prolog module code is the unfortunate mix-up between loading and importing. It’s quite simple really, repeat after me:

Loading is not importing!

If a user wants to load a module, the ensure_loaded/1 directive shall be used. If the user wants to import the public predicates of a module, the use_module/1-2 directives shall be used. Sadly, this basic distinction between these two orthogonal operations is not enforced by most Prolog compilers with a module system (despite some advice found in Prolog user manuals). Moreover, the ensure_loaded/1 directive shall only be used outside modules. Its semantics should simply be: load a file if not already loaded. There is no need for this directive to be a jack-of-all-trades. A simple test: if your Prolog compiler complains when you load two Prolog source files defining two different modules that export the same predicate, then your Prolog compiler is broken. Complaining about conflicting imports only makes sense when importing. Otherwise, a library developer would need to be omniscient about every other library developer’s work. The purpose of module encapsulation is to avoid predicate name conflicts in the first place. Auto-importing public predicates when loading is equivalent to punching holes in module encapsulation. For what purpose? Is it really that much trouble to use the use_module/1-2 directives for importing? Sure, these directives also load a file if not already loaded. They must. After all, we are saying that we want to use a module! That doesn’t mean that the ensure_loaded/1 directive should work as a use_module/1-2 directive in disguise!
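
A minimal sketch of the simple test mentioned above, using two hypothetical modules that export the same predicate:

% file m1.pl
:- module(m1, [size/2]).
size(m1, 1).
 
% file m2.pl
:- module(m2, [size/2]).
size(m2, 2).
 
% loading both files shall be harmless:
:- ensure_loaded(m1).
:- ensure_loaded(m2).
 
% importing both is a genuine conflict that deserves a complaint:
:- use_module(m1).
:- use_module(m2).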

As the bible says, why complicate implementation, complicate documentation, complicate semantics? In order to perpetuate mix-ups? To ruin encapsulation goals? Just to cope with sloppy user programming?


Logtalk 2.37.2 released

I released Logtalk 2.37.2 a couple of days ago. This is the third incremental release of the 2.37.x version. A total of 37 major releases, 122 when including both major and minor releases, since I started the Logtalk project in January 1998 (the number 2 means this is the second generation of the Logtalk language; generation 3 is coming in a not so distant future). Release 2.37.3 is already being developed. This is your typical example of the open source mantra: release early, release often.

The wide Prolog compatibility goals of Logtalk mean that there is really no end in sight for Logtalk development. Sometimes this scares me. At other times, it pushes me out of bed. It also means that a healthy number of Prolog compilers are alive and kicking, continuously being improved. Many of the changes in this and past releases are Prolog compatibility updates. Lack of strong standardization only complicates matters.

This release takes Logtalk support for compiling Prolog modules to a new level. This support is important for a number of reasons. First, it shows that Logtalk is not only a superset of plain Prolog but also of Prolog plus modules. Within reasonable terms, of course, given that Prolog module dialects often remind me of the tale of the Tower of Babel. Second, compiling a module as a Logtalk object helps to identify potential problems when porting Prolog code to Logtalk. Testing of this feature included compiling, or trying to compile, several Prolog module libraries and non-trivial Prolog module applications such as TopLog and ClioPatria. Results are good, despite some unfortunate problems in the original Prolog module code. Third, all the trouble of implementing this feature helps to improve the Logtalk code base, making it more robust and able to cope with a diversity of programming practices. Fourth, it helps me to better understand the subtleties of specific Prolog module systems, something that is often not easy to learn just by sitting down and reading user manuals. One always learns good bits to adapt and pitfalls to avoid when studying other people’s code.

Hope you enjoy this new release.

P.S. It’s great to finally get some free time to post some thoughts on Logtalk development after a very tiresome teaching semester. I keep dreaming about doing Logtalk development full time. Maybe one of these days…


Meta-predicate semantics

Meta-predicates allow the reuse of programming patterns. Encapsulating meta-predicates in Prolog modules or Logtalk objects allows client modules and objects to reuse predicates that are customized by calls to local predicates. Nevertheless, meta-predicate semantics differ between Prolog and Logtalk. Logtalk meta-predicate semantics are quite simple:

Meta-arguments are always called in the meta-predicate calling context. The calling context is the object making the call to the meta-predicate.

Prolog semantics are similar but require the programmer to be aware of the differences between implicit and explicit module qualification. Consider the following meta-predicate library:

:- module(library, [my_call/1]).
 
:- meta_predicate(my_call(:)).
my_call(Goal) :-
    write('Calling: '), writeq(Goal), nl, call(Goal).
 
me(library).

A simple client could be:

:- module(client, [test/1]).
 
:- use_module(library, [my_call/1]).
 
test(Me) :-
    my_call(me(Me)).
 
me(client).

A simple test query:

?- client:test(Me).
Calling: client:me(_G230)
Me = client.

This is the expected result, so everything seems nice and clear. But consider the following seemingly innocuous changes to the client module:

:- module(client, [test/1]).
 
test(Me) :-
    library:my_call(me(Me)).
 
me(client).

In this second version, we use explicit qualification in order to call the my_call/1 meta-predicate. Repeating our test query gives:

?- client:test(Me).
Calling: library:me(_G230)
Me = library.

In order to understand this result, we need to be aware that the :/2 operator both calls a predicate in another module and changes the calling context of the predicate to that module. The first use is expected. The second use is not obvious; it is counterintuitive and often not properly documented. We can, however, conclude that the meta-predicate definition is still working as expected, as the calling context is set to the library module. If we still want the me/1 predicate to be called in the context of the client module instead, we need to explicitly qualify the meta-argument by writing:

test(Me) :-
    library:my_call(client:me(Me)).

This is an ugly solution but it will work as expected. Note that the idea of the meta_predicate/1 directive is to avoid the need for explicit qualifications in the first place. But that requires using use_module/1-2 directives for importing the meta-predicates and implicit qualification when calling them.

Explicit qualification is not an issue in Logtalk, nor does it change the calling context of a meta-predicate. Explicit qualification of a meta-predicate call sets where to start looking for the meta-predicate definition, not where to look for the meta-arguments definition.
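
Translating the second version of the example to Logtalk makes the difference concrete. The following is a minimal sketch (assuming the Logtalk meta_predicate/1 directive, with 0 marking a meta-argument that is called as a goal):

:- object(library).
 
    :- public(my_call/1).
    :- meta_predicate(my_call(0)).
    my_call(Goal) :-
        call(Goal).
 
    me(library).
 
:- end_object.
 
:- object(client).
 
    :- public(test/1).
    test(Me) :-
        library::my_call(me(Me)).
 
    me(client).
 
:- end_object.

Despite the explicit reference to the library object, the meta-argument me(Me) is called in the context of the client object, the sender of the my_call/1 message:

| ?- client::test(Me).
Me = client
yes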

I suspect that the contrived semantics of the :/2 operator are rooted in optimization goals. When a use_module/1 directive is used, most (if not all) Prolog compilers require the definition of the imported module to be available (thus resolving the calls at compilation time). However, that doesn’t seem to be required when compiling an explicitly qualified module call. For example, using SWI-Prolog 5.7.10 or YAP 6.0, the following code compiles without errors or warnings (despite the fact that the module xpto doesn’t exist):

:- module(foo, [bar/0]).
 
bar :-
    xpto:blabla.

Thus, in this case the xpto:blabla call is resolved at runtime. In our example above with the explicit call to the my_call/1 meta-predicate, the implementation of the :/2 operator propagates the module prefix to the meta-arguments. Doing otherwise would imply knowing at runtime the original module containing the call, information that most Prolog compilers don’t keep.

In the case of Logtalk, the execution context of a predicate call always includes the calling context, allowing simpler meta-predicate semantics (and much more). Moreover, the Logtalk compiler doesn’t need to know that we’re calling a meta-predicate when compiling source code. This allows client code to be compiled independently of library code. Meta-predicate information is either used at compile time when static binding is possible or at runtime. In the second case, the caching mechanism associated with dynamic binding ensures that the necessary computations to know which arguments are meta-arguments are only performed once. There is, of course, a small performance penalty in carrying predicate execution context. I argue that’s a small price to pay in order to simplify meta-predicate semantics.