Feature Proposal: Current implementation is hacky and not Moo-compatible
Motivation
I cannot proceed without changing this. – This is no longer true; see the discussion below.
ImplementationClasses are very rare, if not extinct, beasts these days, and it was decided that they are to be reconsidered later on another level – perhaps as part of a new plugins model, but this remains to be considered and discussed. Therefore I keep this proposal open.
Description and Documentation
Currently store implementation classes are chained using inheritance that is enforced by the session constructor mangling @ISA at run time. This is definitely not the right way to do it.
Frankly, I tried to implement a similar approach by injecting the corresponding extends clause into the implementation classes, but unfortunately that turned out to be even more hacky than the original @ISA mangling. As far as I remember, Moose would allow this kind of class manipulation through its metaclass protocol. But Moo achieves its performance precisely by rejecting the idea of a meta-protocol, at the cost of a lot of Moose's flexibility.
I have two questions related to this issue. The first: do we really need this feature? The second arises only if the answer to the first is 'yes': will it be possible to change the implementation classes to match the new code?
The second question stems from the fact that, since an inheritance chain cannot be used, the only viable replacement is a FIFO queue (we actually have a list iterator for this). But looking into the RenderFormTests code, I see that an implementation class may or may not pass control over to its base class by calling SUPER::. Assuming that this can no longer be used in either form, a replacement protocol needs to be developed. I'm not sure what kind of protocol it should be, as a few different approaches are possible. My view is to:
- Make Foswiki::Store a role.
- Replace the class of the store attribute on the session object with Foswiki::Store::Container, which would do the Foswiki::Store role. It would also build and hold the list of all store implementations as instantiated objects – including the main one (Foswiki::Store::PlainFile, for instance). The main store is always last on the list.
- Implementation classes will have access to the containing Foswiki::Store::Container object via an attribute – the same way almost any other Foswiki object has access to the session.
- Instead of passing control over to the parent method, an implementation class would use a construct like $this->storeContainer->next('methodName'). methodName is then searched along the list until an implementation object that can do this method is found. An exception would be thrown if nothing is found. (A sketch follows below.)
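A minimal sketch of how such a container and its next() dispatch might look. The package and attribute names follow the proposal above; the extra $caller argument (so the container knows where on the list to resume) and the plain die are illustrative choices, not part of the proposal.

package Foswiki::Store::Container;
use Moo;
use Scalar::Util qw(refaddr);

with 'Foswiki::Store';    # assumes Foswiki::Store has been refactored into a Moo::Role

# Ordered list of instantiated implementation objects; the main store is last.
has implementations => ( is => 'ro', required => 1 );

# Dispatch $methodName to the next implementation after $caller that can do it.
sub next {
    my ( $this, $caller, $methodName, @args ) = @_;
    my $seen = 0;
    for my $impl ( @{ $this->implementations } ) {
        if ( !$seen ) {
            $seen = 1 if refaddr($impl) == refaddr($caller);
            next;
        }
        return $impl->$methodName(@args) if $impl->can($methodName);
    }
    # A real implementation would throw a proper Foswiki exception here.
    die "No store implementation provides '$methodName'";
}

1;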
This approach looks to me like the one most compatible with the current code. But it would require third-party implementation classes (besides the mandatory conversion to Moo) to:
- do the Foswiki::Store role
- replace all SUPER:: calls with the next mechanism mentioned above, or something similar
- perhaps handle exceptions in some cases
Unfortunately I haven't been able to check any real-life examples yet. But whatever I find there, community help would still be very useful.
Impact
Implementation
--
Contributors: VadimBelman - 26 Feb 2016
Discussion
My still-incomplete work on VersatileStore was designed to inherit from another Store. The idea being that VersatileStore is not designed to handle attachments (RDBMSes are not so good at handling files; leave that to the file system). Therefore, either VersatileStore has to call 'next' so its parent class handles the attachment details, or attachment-only method calls (not provided by VersatileStore at all) would be dispatched directly to the parent class. I tested this with PlainFileStore handling attachments. Therefore, ImplementationClasses would allow the administrator to set this to: [ 'PlainFileStore', 'VersatileStore' ] or [ 'RCSLite', 'VersatileStore' ] as the administrator prefers (be aware that my ImplementationClasses ordering may be backwards).
Even more flexibility is possible: should Versatile, after successfully doing a saveTopic, then call next:saveTopic to allow the parent to also save the topic? I would argue that it should. This would allow the topic to be stored in the Versatile DB (currently MariaDB or MySQL, thus possibly on another server).
PlainFileStore would also save the topic, thus providing a simple plain-text copy and backup, which is also easy for the admin to grep (and use other tools on) from the command line. However, the administrator may choose not to save the topics twice (it will slow write actions to the Store); in that case the admin can set ImplementationClasses to: [ 'PlainFileStore', 'TopicSinkStore', 'VersatileStore' ]. The advantage of this approach is that the TopicSinkStore is written and tested once and can be used in combination with other Stores. Store authors never have to include a config switch to choose whether to pass topics through or not.
Implementing this via roles and a FIFO queue will of course work (with some Store changes), but it will be a little slower than method dispatch; as any Store (so far) is IO-bound, that may not matter much. However, the loop will need to check for things like can('saveAttachment'). It may also be possible to perform some initialization to build a hash structure and speed some of this up, but that is probably premature optimization.
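A minimal sketch of the kind of precomputed dispatch table mentioned above, built once at container construction; it assumes the ordered implementations list from the proposal, and the method list shown is illustrative, not the full Store API.

# Build a method-name => implementation map once, so the per-call can()
# loop becomes a single hash lookup.
sub _build_dispatch {
    my ($this) = @_;
    my %dispatch;
    for my $impl ( reverse @{ $this->implementations } ) {
        for my $method (qw(readTopic saveTopic saveAttachment moveTopic)) {
            $dispatch{$method} = $impl if $impl->can($method);
        }
    }
    # Iterating in reverse means earlier implementations win the slot.
    return \%dispatch;
}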
--
JulianLevens - 26 Feb 2016 - 09:30
Could I arrange VersatileStore to do extends $Foswiki::cfg{VersatileStore}{ParentStore}; in a Moo class? If that works then we could possibly drop ImplementationClasses. We do lose some flexibility, but then we don't really use or need that flexibility at the moment anyway. As and when we need it, someone will need to write a feature proposal to introduce a Moo-compatible 'ImplementationClasses' concept.
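A minimal sketch of what that could look like, assuming the config key above holds a loadable class name before this module is compiled; the fallback value is shown only for illustration.

package Foswiki::Store::VersatileStore;
use Moo;
use Module::Runtime qw(use_module);

# Pick the parent class from configuration. The value must be known when
# this file is compiled and cannot change for the lifetime of the process.
my $parent = $Foswiki::cfg{VersatileStore}{ParentStore}
  || 'Foswiki::Store::PlainFile';    # fallback for illustration only
use_module($parent);                 # make sure the parent class is loaded
extends $parent;

1;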
As noted by GeorgeClark in IRC, MongoDB may also use ImplementationClasses, but I believe that was never standardized work: I think it had local hacks to core to make it all work. I am not even sure if anybody is actively using it. We'd better contact PaulHarvey to see if he knows its current status.
--
JulianLevens - 26 Feb 2016 - 11:02
A lot of points to discuss... Let's start from the end. $Foswiki::cfg might be used. I haven't tested it yet but generally see no big problem with it. My objections:
- I'd like to get away from the global cfg variable and move it into a session (or whatever it becomes later) attribute. I want to see it as something like $this->session->config('Domain.Variable'). Otherwise we're in trouble when the time comes to handle multiple domains in a single process (PSGI). The impact on extends is obvious; possible, but it'd be a hack again.
- Again, PSGI. Once defined, inheritance cannot be changed. If another site wants to be configured with a different order – we're doomed.
- Moo method chaining is done using the around keyword. I'm not sure how it will work under such conditions; it should, but I cannot guarantee it. (See the sketch after this list.)
- Minor but still relevant: manual inheritance definition would be hard for admins to understand and makes errors hard to track down.
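For reference, a minimal sketch of the around modifier that Moo classes use instead of SUPER::; class, parent and method names here are illustrative.

package Foswiki::Store::VersatileStore;    # names illustrative
use Moo;
extends 'Foswiki::Store::PlainFile';

# 'around' wraps the inherited method; $orig is the implementation next
# up the inheritance chain.
around saveTopic => sub {
    my ( $orig, $this, @args ) = @_;
    # ... save the topic in this store first ...
    return $this->$orig(@args);    # then pass control to the parent store
};

1;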
Thinking of PSGI, we're by definition bound to static classes without dynamic modifications. This brings us back to the first comment. BTW, I would reverse the order of definition in ImplementationClasses to a more human-readable FIFO with PlainFileStore being the last on the list.
TopicSinkStore – interesting idea, but it complicates the admin's life again. Besides, how would it find out that a method from the previous store (VersatileStore in our case) is missing and has to be passed over to the PlainFileStore? But we may have two additional config settings here:
- One would globally disable passing a method over.
- Another would work at the per-implementation-class level, disabling it for an individual class. Something like $Foswiki::cfg{VersatileStore}{Finalize}.
Both options could be handled by the container object, unless I'm missing something here.
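A minimal sketch of how the container could consult such settings before passing a call over; both config keys here are the hypothetical ones named above, not existing Foswiki settings.

sub _may_pass_over {
    my ( $this, $impl ) = @_;

    # Global switch: never pass a method over to the next implementation.
    return 0 if $Foswiki::cfg{Store}{DisablePassOver};

    # Per-class switch, e.g. $Foswiki::cfg{VersatileStore}{Finalize}:
    # a store marked as Finalize never lets control move past it.
    my ($shortName) = ref($impl) =~ /([^:]+)$/;
    return 0 if $Foswiki::cfg{$shortName}{Finalize};

    return 1;
}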
It also came to my mind that filesystem handling could be done by a separate process. It would reduce UI latency.
--
VadimBelman - 26 Feb 2016
I think Sven was trying to achieve a kind of mixin functionality i.e. a store may override selected functionality of an underlying store without knowing what implementation that underlying store is. Another scenario is stores coexisting as peers, though that could be achieved using a third store implementation that acted as an adapter.
This mixin functionality was destined to replace the existing "cache hook" functionality employed by the likes of the DBIStoreContrib. I simplified it somewhat from Sven's original code, but I may have misunderstood what he was trying to achieve. "Sven" and "clear descriptive comments" have never appeared in the same sentence (before now).
--
Main.CrawfordCurrie - 26 Feb 2016 - 17:47
The more I think about it, the more I get convinced that this functionality had better be dropped for now. It could later be revived on a new level – as new plugins. Because this is what is going on here: third-party code that butts into the standard code flow, implements handlers, and gets called in a certain order.
Trying to reimplement it in a different way without introducing a new generic mechanism for such code is just a waste of time.
--
VadimBelman - 26 Feb 2016
About the "A lot of points to discuss... Let's start from the end" comment:
Yes, there are many points that could be discussed about Foswiki and its internals. But right now I'm a bit perplexed: is the current Moofication about
- designing a new Foswiki - i.e. are we talking here about FoswikiNG?
- or about converting the current Foswiki to "better" code - but with backward compatibility?
Don't get me wrong - I definitely agree with the redesign. Best would be to start the TOTAL REDESIGN. But if this work is about such a big redesign, we really need to talk in detail about what features we want to have - and not only blindly "hack some new OO".
A few comments:
- "I'd like to get away from the global cfg variable" - I absolutely agree!! But if you want to make such a change, we need to talk about how the new configuration subsystem should work.
- For example: parts of the config must be loaded even before app.psgi is loaded, because we need to know some config values at the server-starter script level. Also, many things (authentication, session management, access control, etc.) will be done in app.psgi, so parts of the config must already be loaded - because we need to configure them. But (on the other side) we need strictly only a part of the config, because many other parts of the config depend on the particular Foswiki instance. So we need to divide the config into more parts. Maybe this isn't clear yet - in short, just imagine: one codebase, one Plack server, one Perl interpreter in memory - but multiple wikis (for example for multiple domains) - i.e. a wiki farm. For this we need to divide and conquer - some configuration items are "server-specific", some are "wiki-instance specific" and some are "request specific"...
- How will such a new configuration be serialized and stored? The current Data::Dumper serialization is not the best - it causes many interoperability problems. It would be much better to use YAML or the like, or even a pluggable configuration backend (INI-style files, YAML, JSON, XML, PLIST, Data::Dumper). Will we have multiple config layers? (It would be extremely good to have the possibility to define two different layers: production and development.)
- Many-many-many such questions....
- "Currently store implementation classes are chained using inheritance enforced by the session constructor mangling @ISA. This is definitely not the right way to do it." - Absolutely agree!!! But I can't agree with the "... Let's start from the end" part of the first citation. If we are talking about the REDESIGN, we should not start "at the end". We must start at the beginning and define what features the new Store should have. For example:
- Don't mix the "type" of the store, e.g. PlainFile, RcsLite, DbiStore (aka the class or "implementation"), with an instance of the given implementation. Two different things. If we are talking about the redesign, it would be nice to have, for example:
- The possibility to have multiple PlainFile-based "stores" in one wiki. For example, the System web would use a PlainFile store with some defined "root" in the filesystem, and the same wiki could use another PlainFile store instance (with a different "root" on the filesystem) for all the other webs. Or we could have multiple DBI-based stores "mounted" into one wiki - each with its own DSN, and each store used for defined webs, etc.
- Or an even weirder idea: separate the store fully from the core into its own TCP server process? (aka a CMIS server?, easy WebDAV, the possibility of sharing webs between different wikis on the farm, etc.) - okay, you probably won't agree with this one - I mention it only as a demonstration of the many, many open questions (in FoswikiNG).
So, please decide: are we going to talk about FoswikiNG?
- If not (so we are still talking about the "current" Foswiki):
  - every new feature must go through its own FeatureRequest proposal (e.g. the proposed $cfg changes, store changes)
  - and (important!) the changes should be (mostly :/) backward compatible
- If yes (so we are talking about FoswikiNG), then:
  - we should create a FoswikiNG web on the trunk
  - and start talking from the beginning - about the zillions of possible new features which the new FoswikiNG should/could have (aka forget the $Foswiki::cfg{Bla}{Blabla})
  - and do so without backward compatibility concerns - so don't talk about "how the current implementations are done", because they're "not the best"
  - and therefore we can't mix the two "designs" together.
Again - I agree with and 100% support the REDESIGN.
- We just need to be clear about WHAT we are talking about - and
- not mix together the current Foswiki design with some (hypothetical) FoswikiNG design.
I'm pretty sure that the CatDog approach will cause many problems in the future. If there are going to be some backward-incompatible changes, the changes should be big enough to be "forward compatible" - i.e. they should allow for many of the possible future features. Otherwise, in the future we will have many small but backward-incompatible changes instead of one big one.
So sorry for the "controversies" - but this is how I feel. And just ignore me if you don't agree.
--
JozefMojzis - 26 Feb 2016
Hm... next time I must "refresh" the page before posting a comment. Vadim's last comment makes mine unnecessary.
--
JozefMojzis - 26 Feb 2016
Jozef, your comment is not unnecessary but perhaps a little bit redundant.
You've touched on a few interesting points:
- No, we're not speaking of a redesign, that's for sure. I don't think that replacing $Foswiki::cfg{Some}{Key} with $this->session->cfg->{Some}{Key} is a big compatibility issue. I forecast a similar destiny for some other global variables, like the macros hash, for example. Your idea about different cfg backends is good, but it may require either storing a tied hash reference in the cfg attribute, or using the syntax I mentioned above, or making cfg a method which returns a hashref. Whatever. (See the sketch at the end of this comment.)
- Early configuration. Well, yes, over the processing life cycle we have two major states: with or without an initialized app, which is built around an incoming request. Sure enough, an uninitialized app means we know nothing about which site/domain is being served at this point, and consequently it's OK to have a global configuration variable. For that matter, Foswiki already deals with setlib.cfg and LocalSite.cfg. In my view those names fit perfectly into the stages we will have.
- Stores – yes, you've got it right. Except, maybe, for multiple base stores, because this would complicate the code and raise a lot of questionable issues. So this is where implementation classes/plugins/filters, or whatever name you give them, would be really useful. I would take it even one step further: as I once stated, it would be great for some of the META keys to be objects. Foswiki::Meta must let these objects decide whether they want to store themselves on their own. For example, a topic might be saved in a DB while its attachments store themselves on a filesystem...
But these are questions we shall talk about in the general planning topic.
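A minimal sketch of the kind of cfg accessor discussed above: cfg as a session attribute holding a hashref, plus a dotted-path convenience method. All names here are illustrative, not an agreed API.

package Foswiki::Session;    # illustrative; "or whatever it becomes later"
use Moo;

# The configuration hash becomes a session attribute instead of a global.
has cfg => ( is => 'ro', default => sub { {} } );

# Dotted-path convenience accessor: $session->config('Store.Implementation')
sub config {
    my ( $this, $path ) = @_;
    my $node = $this->cfg;
    for my $key ( split /\./, $path ) {
        return undef unless ref $node eq 'HASH';
        $node = $node->{$key};
    }
    return $node;
}

1;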
--
VadimBelman - 26 Feb 2016
I just recalled another interesting point. Having the store engine as a separate process listening on a TCP port is a great idea, which I have thought about too. But if we consider hosting options where having a running daemon is not possible, then it becomes just a dream. It also has a very significant drawback in case the daemon crashes.
But there is another option. It is vulnerable to crashing too, but the impact on overall functionality would be much less significant. Foswiki may fork a process and communicate with it by means of pipes. If it crashes, we would just restart it ASAP, either by reacting to SIGCHLD or the next time a request to the store is made. The drawback of this approach is much higher memory requirements.
Finally, there is a hybrid solution for PSGI. The daemon could be forked as part of early-stage init and kept alive by watching SIGCHLD.
But remember that any of this would require an additional level of control within Foswiki with some attributes of transactions. I.e. the object being saved would have to stay in memory until we get confirmation from the process that saving succeeded. Other problems: authorization and reading. The former may not be confirmable at several different stages, causing a number of problems. The latter would not benefit from this approach and, worse, would add to the latency.
--
VadimBelman - 26 Feb 2016
The existing ImplementationClasses concept can work with a redesign.
All Stores should use the following:
- extends $Foswiki::cfg{ThisStore}{ParentStore};
- I have tested extends $parent; — it does work
If you had {ImplementationClasses} = [ 'VersatileStore', 'TopicSinkStore', 'PlainFileStore' ] then that becomes:
- {VersatileStore}{Parent} = 'Foswiki::Store::TopicSink'
- {TopicSinkStore}{Parent} = 'Foswiki::Store::PlainFile'
- {PlainFileStore}{Parent} = 'Foswiki::Store'
After all, that's all the fancy dynamic list of classes really does anyway. To reiterate: it's just a class hierarchy — nothing more.
Foswiki.pm would need to 'use $Foswiki::cfg{Store}{Implementation};' (i.e. load the class named there), which would be the primary store in the hierarchy, and {ImplementationClasses} can disappear (see the sketch below).
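A minimal sketch of loading the configured primary store at run time, since use itself cannot take a variable; Module::Runtime is one way to do it, and the fallback class is shown only for illustration.

use Module::Runtime qw(use_module);

# Load and instantiate whichever class the admin configured as the primary store.
my $storeClass = $Foswiki::cfg{Store}{Implementation}
  || 'Foswiki::Store::PlainFile';    # fallback for illustration only
my $store = use_module($storeClass)->new();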
Some checkers would be in order to verify that the hierarchy is valid. In general, configure will have the responsibility to ensure the Store config is right, or at least guide the admin to sensible choices.
My TopicSinkStore concept is not a mixin as it's normally defined. In reality it's just another class in the hierarchy that implements Foswiki::Store. That said, it's a useful class to be able to 'mix' into the hierarchy.
Assuming Versatile -> PlainFile as the class hierarchy:
- Versatile is asked to saveTopic and this is added to the DB
- Versatile at the end of its saveTopic method calls SUPER
- SUPER happens to be PlainFile, which also performs saveTopic and writes the topic out to the file system
Hence the topic is saved both by Versatile and PlainFile.
Assuming Versatile -> TopicSink -> PlainFile as the class hierarchy:
- Versatile is asked to saveTopic and this is added to the DB
- Versatile at the end of its saveTopic method calls SUPER
- SUPER happens to be TopicSink, which deliberately does not call its SUPER
Hence the topic is saved only by Versatile (as saveTopic does not get passed through to PlainFile).
TopicSink only needs to care about passing on read-type method calls while blocking (i.e. sinking) the pass-through of write-type methods. It will not need any more knowledge than that.
So 'mixing' TopicSink between Stores, or not, allows the pass-through (or not) of topic saves, etc. This will be an admin choice via configure. A sketch of what such a class might look like follows below.
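A minimal sketch of what such a TopicSinkStore could look like under the class-hierarchy approach described above; the class is still only an idea, so the parent and the method set are illustrative.

package Foswiki::Store::TopicSink;
use Moo;
extends 'Foswiki::Store::PlainFile';    # parent would come from configure

# Write-type calls are swallowed: the parent is deliberately not called,
# so nothing further down the hierarchy ever sees the save.
# (Return values would need to follow the real Store API; elided here.)
sub saveTopic { return; }
sub moveTopic { return; }

# Read-type calls need no code at all: plain inheritance already passes
# them straight through to the parent store.

1;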
To reiterate the advantages I mentioned above:
- A Store author does not need to implement TopicSink functionality into their store
- If they did it would probably be done by a configure flag and conditional logic — i.e. a lot of extra testing required
- Each store that re-implements this concept could re-introduce these bugs
Or to put it another way: use a well-tested class that provides that functionality for free for any store.
Note that TopicSinkStore is still an idea with no implementation. The Store API may need some tweaking: often just clarification in the docs. As a concept it has a lot of merit.
Just to hammer it home: it's just a Store class hierarchy without genuine mixin magic — therefore it's easy to implement differently.
--
JulianLevens - 29 Feb 2016 - 18:05
Julian, your proposal misses two issues which, put together, invalidate your vision.
- We must remember the future PSGI environment. A single codebase may serve different virtual hosts with different configurations, i.e. they may have different orders of ImplementationClasses.
- A Moo class, once instantiated, cannot be changed anymore. It means one wouldn't be able to replace the base class in the first place.
These two items are mutually incompatible. We can forget about PSGI for a minute and think of reconfiguring a single host under mod_perl, FastCGI or any other semi-persistent environment – it would not be possible to change the order without restarting either the whole web server or the FastCGI process.
Another problem is not critical but demonstrates how fragile this approach is. If you look into RobustnessTests.pm there is a Foswiki::Store::UnitTestFilter package defined which is used to test ImplementationClasses. The problem with this class is that it cannot be modified dynamically either. The error message generated is the same as if the class had already been instantiated, though it never was. I didn't dig deeply into the issue because that would be a waste of time. But that was the reason I initiated this proposal.
There is also a nuance which I wasn't able to check yet. SUPER:: isn't used with Moo (except for some very special cases). Instead there is the around keyword. I don't know how well it would play in a dynamic inheritance environment.
Yet I don't like the idea of manual class manipulation as it may have unpredictable results.
--
VadimBelman - 29 Feb 2016
I think we are a little at cross purposes.
Your Motivation is stated above as: "I cannot proceed without changing this."
Therefore, I was trying to argue that we can logically Moo-ify ImplementationClasses with relatively small changes and let you proceed.
That is, to try for a like-for-like change rather than redesign it for a possible future. You are already eating a 7-course meal with Moo-ification, and I was trying to limit the work to an extra bite or two rather than a few more courses. However, if you feel hungry enough to build a queuing system — I have no objections; indeed it's a better design. (Of course, I could be wrong and writing and testing a queuing system is only a few extra bites.)
I am well aware of the limitations of the current architecture (Moo-ified or not). When developing Versatile I would repeatedly do the equivalent of bulk_copy.pl from an existing Store into Versatile. The limitations of the architecture meant that I could not copy from a Versatile+RcsLite combination to a Versatile+PlainFile combination. Once Versatile.pm has been loaded, whichever parent it had the first time is stuck in place. There were also issues with config variables, which is why I proposed StoresShouldBePassedConfigHash. I now realize that concept should be expanded to handle multiple configs passed around in hashes, or even something more sophisticated, which I've seen you discuss elsewhere.
bulk_copy overcomes these limitations by forking. Therefore each fork has its own Foswiki config and Store hierarchy.
Whatever you decide to do I would like to clarify certain points.
What I am proposing with extends $Foswiki::cfg{ThisStore}{Parent}; is actually static, without any manipulation. At the time 'extends' is called, the value passed is a constant. Indeed, the fact that it's constant is why we can only have one hierarchy (strictly no reuse of the same Store in multiple hierarchies). The class construction actually happens at compile time. Changing the value of $Foswiki::cfg{ThisStore}{Parent} after the class hierarchy has been set up will not change that class.
I looked at the code in Versatile which carefully calls SUPER to ensure the parent Store will handle attachments.
sub new {
my $class = shift;
my $this = $class->SUPER::new(@_);
...
return $this;
}
sub finish {
my $this = shift;
...
$this->SUPER::finish();
}
I suspect that Moo may negate the need for these. Nonetheless, I note that new() calls SUPER 'before' its own work, whereas finish() calls SUPER 'after'. Therefore a queuing system must be aware of the need to iterate forward or backward depending on the method call.
sub moveTopic {
my ( $this, $oldTopicObject, $newTopicObject, $cUID ) = @_;
...
$this->SUPER::moveTopic($oldTopicObject, $newTopicObject, $cUID);
}
sub moveWeb {
my ( $this, $oldWebObject, $newWebObject, $cUID ) = @_;
...
$this->SUPER::moveWeb($oldWebObject, $newWebObject, $cUID);
}
This is just passing on the call (afterwards). Hence each Store can ensure all its topics/webs have been moved. Just call moveTopic/moveWeb for each Store in the queue, starting with the primary Store.
sub getRevisionHistory {
my ( $this, $meta, $attachment ) = @_;
return $this->SUPER::getRevisionHistory($meta, $attachment)
if $attachment;
my $ti = $this->_topicInfo($meta->web, $meta->topic);
return Foswiki::Iterator::NumberRangeIterator->new($ti->{version}, 1);
}
sub getVersionInfo {
my ( $this, $meta, $version, $attachment ) = @_;
return $this->SUPER::getVersionInfo($meta, $version, $attachment)
if $attachment;
return $this->_topicInfo($meta->web, $meta->topic, $version);
}
The above are store reads, and once you have a successful read you return that. This principle applies to all read actions including readTopic.
Note that the above have conditional logic, so I'm not sure whether before/after/around applies. I'm not sure how you would handle that in a queuing system either. A simple ploy would be to split each of these into two method calls: e.g. getTopicRevisionHistory($this, $meta) and getAttachmentRevisionHistory($this, $meta, $attachment). However, I noticed changes in the git logs going the other direction :P.
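One way a queuing system might handle such read calls is sketched below: walk the queue and return the first implementation's defined answer. This assumes the container/implementations layout from the proposal; the split into topic/attachment variants mentioned above is not assumed here.

# Sketch: dispatch a read-type call along the queue and return the first
# defined answer, so reads never need to reach every store.
sub dispatchRead {
    my ( $this, $methodName, @args ) = @_;
    for my $impl ( @{ $this->implementations } ) {
        next unless $impl->can($methodName);
        my $result = $impl->$methodName(@args);
        return $result if defined $result;
    }
    return undef;    # nothing in the queue could answer
}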
sub remove {
my ( $this, $cUID, $meta, $attachment ) = @_;
if($attachment) {
$this->SUPER::remove($cUID, $meta, $attachment);
return;
}
# Remove topic
...
}
remove would have to be applied to each Store, starting with the primary Store. In this case, in a queuing system I could replace the above with return if $attachment;.
sub create {
my $this = shift;
$this->SUPER::create(@_);
# Create topic tables
...
return;
}
This method is not part of the Store API — at least not yet (it needs a feature proposal). When calling bulk_copy or equivalent you will need to destroy the existing data and recreate the underlying base structures. This was mainly important in development: I was constantly re-loading Stores and needed a Store-specific method to re-create a base Store from scratch. E.g. PlainFile would rm -rf $Foswiki::cfg{DataDir}/* whereas Versatile would drop and create all the tables.
The SUPER call here is just to ensure all Stores in the hierarchy get the chance to create, with the primary Store first.
None of this is rocket science. I reviewed these in VersatileStore to remind myself of the intricacies and thought I should share my thoughts. Note that most of the method calls are Topic- or Attachment-specific, so I did not need to concern myself with SUPER, and I probably missed cases where it's strictly required. It occurs to me now that these details are not documented in the Store API, which does need to be discussed and agreed upon. That's a plus point for a queuing system: it would get these details right once and relieve Store authors of that burden.
--
JulianLevens - 01 Mar 2016 - 10:34
I'm so sorry, this is my fault! Eventually we agreed on IRC that ImplementationClasses can wait for better times, as they're not used in real life. So for the moment I have just commented out that code and hope to get back to it on a new level, perhaps with the new plugins model.
Sorry again.
PS. I do not have enough time to read all of your last comment right now. Hope to have time later today.
--
VadimBelman - 01 Mar 2016
I have finished reading your notes on the methods and inheritance relations. It is very useful as an overview to have the whole picture right in front of one's eyes.
So far my thoughts about controlling the execution order fit into two major patterns:
- A method may report to the containing object via a returned-hash protocol. I.e., it would have to return a structure which may contain a command for the container. For example, it might follow this protocol:
sub getRevisionHistory {
...
return (flow => 'stop', rc => "return value")
if $condition_met;
}
But I don't like it. The alternative is an exception-based protocol. Say we have a Foswiki::Cntl base exception and Foswiki::Cntl::Cmd deriving from it. Cmd could be Stop or whatever other commands we may need.
sub getRevisionHistory {
...
Foswiki::Cntl::Stop->throw( rc => "return value" )
if $condition_met;
}
Sure, this is only a draft; better solutions might pop up. What is important here is that in developing this design we must think about plugins first of all, where similar issues will need to be resolved.
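A minimal sketch of how the container's dispatch loop might react to such a control exception. Foswiki::Cntl::Stop and its rc accessor are the hypothetical names from the draft above; Try::Tiny is used here purely for the sketch.

use Scalar::Util qw(blessed);
use Try::Tiny;

# Sketch: the container walks the implementations and stops early when
# one of them throws the (hypothetical) Foswiki::Cntl::Stop exception.
sub dispatch {
    my ( $this, $methodName, @args ) = @_;
    my $rc;
    for my $impl ( @{ $this->implementations } ) {
        next unless $impl->can($methodName);
        my $stop = 0;
        try {
            $rc = $impl->$methodName(@args);
        }
        catch {
            if ( blessed($_) && $_->isa('Foswiki::Cntl::Stop') ) {
                $rc   = $_->rc;    # the command object carries the return value
                $stop = 1;
            }
            else {
                die $_;            # not a control exception; re-throw
            }
        };
        last if $stop;
    }
    return $rc;
}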
--
VadimBelman - 01 Mar 2016
Abandoned
--
MichaelDaum - 14 Oct 2024