Posted about 11 years ago by A.Bouchez
Cape Cod Gunny just wrote a blog article about how to replace a global variable by a static class instance.
But I had to react!
Using such a static declaration is just another way of creating a global variable.
This is just a global variable in disguise.
In fact, the generated asm will be just like a global variable's!
It encapsulates the global declaration within a class namespace, but it is still, IMHO, a very wrong design.
I've seen so much C# or Java code using such a pattern (there is no global variable in those languages), and it has the same disadvantages as global variables.
Just like the singleton syndrome, such code is simply not re-entrant nor thread-safe, and a nightmare to debug and to let evolve.
Why globals are (almost always) wrong
Imagine that one day you would like to re-use the code of your app, and run your business logic code on a server.
You would like to re-use your existing code.
But since all your client instances would have to share the same global data, stored in static variables, you would be stuck: the running instances could not be uncoupled.
This just breaks the SOLID principles - both the Single Responsibility principle and Dependency Inversion, to name the biggest two.
What should be done instead of such deprecated globals is to use true classes (or even better, interfaces), then use Inversion of Control / Dependency Injection.
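As a minimal sketch of the idea (the ILogger and TOrderService names below are hypothetical, not part of mORMot): instead of reaching for a global variable or a static class member, the dependency is abstracted behind an interface and supplied via the constructor, so each consumer instance can be given its own, uncoupled implementation.

```pascal
type
  // the dependency is abstracted behind an interface...
  ILogger = interface
    ['{1E9A1BBF-6A3E-4C47-9B52-0D2F5ED7A111}']
    procedure Log(const aMsg: string);
  end;

  // ...and injected via the constructor, instead of being
  // accessed as a global variable or a static class member
  TOrderService = class
  private
    fLogger: ILogger;
  public
    constructor Create(const aLogger: ILogger);
    procedure PlaceOrder;
  end;

constructor TOrderService.Create(const aLogger: ILogger);
begin
  fLogger := aLogger;
end;

procedure TOrderService.PlaceOrder;
begin
  fLogger.Log('order placed'); // no hidden shared state involved
end;
```

Each TOrderService instance now works with whatever ILogger it was given - a real one in production, a stub in tests - so several server-side instances no longer compete for the same shared state.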
When globals may be used
If you look at our mORMot source code, you will find some global
variables here and there.
You won't find many global variable instances defined in the
interface section of our units, but you may find some global
variables in the implementation of well-identified shared
functions.
I found the following rules (about potential singleton use) to be of good interest:
To decide whether a class is truly a singleton, you must ask yourself some
questions.
Will every application use this class exactly the same way?
(exactly is the key word)
Will every application ever need only one instance of
this class? (ever and one are the key words)
Should the clients of this class be unaware of the application they are
part of?
If you answer yes to all three questions, then you've found a
singleton.
The key points here are that a class is only a singleton if all applications
treat it exactly the same and if its clients can use the class without an
application context.
These are in fact the use cases where we define some global variables.
Even more, we use a GarbageCollector: TObjectList instance to handle some global variables which last for the whole application.
Most of those globals are defined at a very low level, in SynCommons.pas:
For process-wide logging purposes;
For the RTTI cache;
For some class-level VMT tricks;
For WinAnsi / CurrentAnsi encoding conversion classes;
For global class registration, with virtual constructors;
For the execution context, e.g. the ServiceContext threadvar.
Note that some of those can be by-passed: you can e.g. use uncoupled logging, with a private family, if needed.
I just hope this little blog article may help you refine your own coding style, and make your projects more maintainable.
Posted about 11 years ago by A.Bouchez
Just wanted to share an awesome, revolutionary SOA idea, on the 4th of July...
You should take a look at http://devnull-as-a-service.com !
DAAS rocks!
We will certainly write a native mORMot provider for this great data provider.
This is another great Open Source project (full code is included).
Posted about 11 years ago by A.Bouchez
As we already stated here, the Delphi compiler for the Win64 target performs well, as soon as you by-pass the RTL and its sub-optimized implementation - as we do for mORMot.
In fact, our huge set of regression tests performs only 10% slower on Win64, when compared to Win32.
But we get access to much more memory - which is not a huge gain for a mORMot server, which uses very little RAM - so it may be useful in some cases, when you need a lot of structures to be loaded in RAM.
The slowdown on Win64 is mostly due to the bigger pointer size, which uses twice the memory, hence may generate a larger number of cache misses (failed attempts to read or write a piece of data in the cache, which result in a main memory access with much longer latency).
But in Delphi, apart from the RTL - which may need more performance tuning, though this seems not to be a priority on Embarcadero's side - the compiler itself is also sometimes less efficient when generating code.
For instance, it sounds like case ... of ... end statements do not generate branch table instructions on Win64, whereas they do on Win32 - and FPC does for any x64 platform it supports. As stated by Wikipedia:
In computer programming, a branch table or jump table is a method of
transferring program control (branching) to another part of a program (or a
different program that may have been dynamically loaded) using a table of
branch or jump instructions. It is a form of multiway branch. The branch table
construction is commonly used when programming in assembly language but may
also be generated by a compiler, especially when implementing an optimized
switch statement where known, small ranges are involved with few gaps.
Here is a simple case ... of ... end statement, as found in our
SynCrossPlatformJSON.pas unit:
case VType of
{$ifndef NEXTGEN}
vtString: result := string(VString^);
vtAnsiString: result := string(AnsiString(VAnsiString));
vtChar: result := string(VChar);
vtPChar: result := string(VPChar);
vtWideString: result := string(WideString(VWideString));
{$endif}
{$ifdef UNICODE}
vtUnicodeString: result := string(VUnicodeString);
{$endif}
vtPWideChar: result := string(VPWideChar);
vtWideChar: result := string(VWideChar);
vtBoolean: if VBoolean then result := '1' else result := '0';
vtInteger: result := IntToStr(VInteger);
vtInt64: result := IntToStr(VInt64^);
vtCurrency: DoubleToJSON(VCurrency^,result);
vtExtended: DoubleToJSON(VExtended^,result);
vtObject: result := ObjectToJSON(VObject);
vtVariant: if TVarData(VVariant^).VType<=varNull then
result := 'null' else begin
wasString := VarIsStr(VVariant^);
result := VVariant^;
end;
else result := '';
end;
Here is the code generated by Delphi on Win64:
SynCrossPlatformJSON.pas.727: case VType of
0000000000560F40 480FB64608 movzx rax,byte ptr [rsi+$08]
0000000000560F45 4883F809 cmp rax,$09
0000000000560F49 7F6B jnle VarRecToValue + $B6
0000000000560F4B 4883F809 cmp rax,$09
0000000000560F4F 0F842F010000 jz VarRecToValue + $184
0000000000560F55 4883F803 cmp rax,$03
0000000000560F59 7F33 jnle VarRecToValue + $8E
0000000000560F5B 4883F803 cmp rax,$03
0000000000560F5F 0F8496010000 jz VarRecToValue + $1FB
0000000000560F65 4883E801 sub rax,$01
0000000000560F69 4883F8FF cmp rax,-$01
0000000000560F6D 0F844F010000 jz VarRecToValue + $1C2
0000000000560F73 4885C0 test rax,rax
0000000000560F76 0F8419010000 jz VarRecToValue + $195
0000000000560F7C 4883E801 sub rax,$01
0000000000560F80 4885C0 test rax,rax
0000000000560F83 0F85C4010000 jnz VarRecToValue + $24D
0000000000560F89 E9A5000000 jmp VarRecToValue + $133
0000000000560F8E 4883E804 sub rax,$04
0000000000560F92 4885C0 test rax,rax
0000000000560F95 747C jz VarRecToValue + $113
0000000000560F97 4883E802 sub rax,$02
0000000000560F9B 4885C0 test rax,rax
0000000000560F9E 0F84A0000000 jz VarRecToValue + $144
0000000000560FA4 4883E801 sub rax,$01
0000000000560FA8 4885C0 test rax,rax
0000000000560FAB 0F859C010000 jnz VarRecToValue + $24D
0000000000560FB1 E956010000 jmp VarRecToValue + $20C
0000000000560FB6 4883F80D cmp rax,$0d
0000000000560FBA 7F32 jnle VarRecToValue + $EE
0000000000560FBC 4883F80D cmp rax,$0d
0000000000560FC0 0F8456010000 jz VarRecToValue + $21C
0000000000560FC6 4883E80A sub rax,$0a
0000000000560FCA 4885C0 test rax,rax
0000000000560FCD 0F84A1000000 jz VarRecToValue + $174
0000000000560FD3 4883E801 sub rax,$01
0000000000560FD7 4885C0 test rax,rax
0000000000560FDA 7447 jz VarRecToValue + $123
0000000000560FDC 4883E801 sub rax,$01
0000000000560FE0 4885C0 test rax,rax
0000000000560FE3 0F8564010000 jnz VarRecToValue + $24D
0000000000560FE9 E9F3000000 jmp VarRecToValue + $1E1
0000000000560FEE 4883E80F sub rax,$0f
0000000000560FF2 4885C0 test rax,rax
0000000000560FF5 745D jz VarRecToValue + $154
0000000000560FF7 4883E801 sub rax,$01
0000000000560FFB 4885C0 test rax,rax
0000000000560FFE 0F84CD000000 jz VarRecToValue + $1D1
0000000000561004 4883E801 sub rax,$01
0000000000561008 4885C0 test rax,rax
000000000056100B 0F853C010000 jnz VarRecToValue + $24D
0000000000561011 EB51 jmp VarRecToValue + $164
And here is the code generated by FPC on Win64:
mov eax, dword ptr [rsi] ; 0027 _ 8B. 06
cmp eax, 2 ; 0029 _ 83. F8, 02
jc ?_0067 ; 002C _ 72, 15
cmp eax, 3 ; 002E _ 83. F8, 03
stc ; 0031 _ F9
jz ?_0067 ; 0032 _ 74, 0F
sub eax, 12 ; 0034 _ 83. E8, 0C
cmp eax, 2 ; 0037 _ 83. F8, 02
jc ?_0067 ; 003A _ 72, 07
cmp eax, 4 ; 003C _ 83. F8, 04
stc ; 003F _ F9
jz ?_0067 ; 0040 _ 74, 01
clc ; 0042 _ F8
?_0067: setae byte ptr [rdi] ; 0043 _ 0F 93. 07
mov rax, qword ptr [rsi] ; 0046 _ 48: 8B. 06
cmp rax, 16 ; 0049 _ 48: 83. F8, 10
ja ?_0084 ; 004D _ 0F 87, 000001F6
lea rdx, [?_0086] ; 0053 _ 48: 8D. 15, 00000000(rel)
movsxd rax, dword ptr [rdx+rax*4] ; 005A _ 48: 63. 04 82
lea rax, [rdx+rax] ; 005E _ 48: 8D. 04 02
jmp rax ; 0062 _ FF. E0
...
?_0086 label dword ; switch/case jump table
dd ?_0077-$ ; 0000 _ 00000172 (rel)
dd ?_0075-$+4H ; 0004 _ 00000148 (rel)
dd ?_0070-$+8H ; 0008 _ 00000098 (rel)
dd ?_0080-$+0CH ; 000C _ 000001DF (rel)
dd ?_0068-$+10H ; 0010 _ 00000078 (rel)
dd ?_0084-$+14H ; 0014 _ 00000261 (rel)
dd ?_0071-$+18H ; 0018 _ 000000CC (rel)
dd ?_0081-$+1CH ; 001C _ 00000204 (rel)
dd ?_0084-$+20H ; 0020 _ 0000026D (rel)
dd ?_0074-$+24H ; 0024 _ 00000144 (rel)
dd ?_0073-$+28H ; 0028 _ 00000124 (rel)
dd ?_0069-$+2CH ; 002C _ 000000AB (rel)
dd ?_0079-$+30H ; 0030 _ 000001E0 (rel)
dd ?_0082-$+34H ; 0034 _ 0000023D (rel)
dd ?_0084-$+38H ; 0038 _ 00000285 (rel)
dd ?_0072-$+3CH ; 003C _ 00000114 (rel)
dd ?_0078-$+40H ; 0040 _ 000001CF (rel)
As you can see, the FPC 2.7.1 compiler generates a branch table, so it will perform much better.
The single movsxd rax, dword ptr [rdx+rax*4] instruction replaces a huge list of cmp/jz statements.
It sounds like the Open Source FreePascal compiler generates better code than Delphi's, not only for floating-point computations, but also for simple general-purpose code.
BTW the floating-point regression issue in XE6 was marked as resolved in QC and fixed in XE6 Update 1. But it is still slower than FPC on 32-bit...
Posted about 11 years ago by A.Bouchez
We did some cleaning in the mORMot official RoadMap.
Now feature-request tickets will detail all the to-do items we would like to implement.
The current framework RoadMap and implementation are in fact going in a pragmatic direction.
No need to make all the framework's units compatible at once: so we introduced some client-dedicated units, without any dependency on SynCommons.pas.
We would like to implement (in this order):
Cross-platform clients (Delphi FMX, SmartMobileStudio, FPC) [09ae8513eb] [168eb753e5] [d7e5521da5];
MVC web support [bd94c11ab1];
Cross-platform server via FPC [3a79adc10f];
Event-driven features [aa230e5299].
The CrossPlatform folder
already contains units which compile under all Delphi compilers (VCL and FMX),
and FPC.
But perhaps we would move the server to Linux, either via FPC, or using Delphi itself! Recently, Linux support appeared on the official Embarcadero Delphi roadmap, as "Linux server support for DataSnap and WebBroker, including RTL and database access".
Perhaps we would consider using this approach, if it is quicker to implement than FPC.
The current FPC implementation is slowed down by diverse RTTI details, whereas we may guess that Delphi would be more consistent on this point.
But if the future Linux Delphi compiler is NEXTGEN-based, it would be a show-stopper for us, since we reject the backward-incompatibility breaks introduced by this compiler branch.
We just hope that the future Linux Delphi compiler will be based on the main x86/x64 compiler, just like the Mac OS X compiler, not on the NEXTGEN compiler: only the linking part may differ. Since they already added Linux support years ago with Kylix, I hope they will be able to cross-compile to Linux platforms.
Feedback is welcome on our forum, as usual.
You could also enhance and follow the corresponding feature request ticket.
Posted about 11 years ago by A.Bouchez
It appears that version 1.25 of Fossil did change the ticket storage behavior:
Enhancements to ticket processing. There are now two tables: TICKET and TICKETCHNG. There is one row in TICKETCHNG for each ticket artifact. Fields from ticket artifacts go into either or both of TICKET and TICKETCHNG, whichever contain matching column names. Default ticket edit and viewing scripts are updated to use TICKETCHNG.
As stated by the official
Fossil Change Log.
It appears that it just broke existing reports, so we had trouble with the display of tickets on our site.
Since we managed to find a workable solution, we would like to share it on our blog, to save other users' time!
Here is a request which would, for instance, display the "Feature Request" tickets, with a description field coming either from the old "comment" field, or from the new TICKETCHNG table:
SELECT
CASE WHEN status IN ('Open','Verified') THEN '#f2dcdc'
WHEN status='Review' THEN '#e8e8e8'
WHEN status='Fixed' THEN '#cfe8bd'
WHEN status='Tested' THEN '#bde5d6'
WHEN status='Deferred' THEN '#cacae5'
ELSE '#c8c8c8' END AS 'bgcolor',
substr(tkt_uuid,1,10) AS '#',
datetime(ticket.tkt_mtime) AS 'mtime',
datetime(min(ticketchng.tkt_mtime)) AS 'ttime',
type,
status,
subsystem,
title,
coalesce(comment,icomment) AS '_Remarks'
FROM ticket
LEFT JOIN ticketchng ON ticket.tkt_id=ticketchng.tkt_id
WHERE status='Open' AND type='Feature_Request'
GROUP BY ticket.tkt_id
ORDER BY ticket.tkt_mtime
This is what is used in our source code repository.
By the way, we did enhance the mORMot official project RoadMap to list only the main features on which we are currently working, then pushed all feature-request to-do items as corresponding tickets.
Our future may be easier to guess now.
Feedback is welcome in our forum, as usual!
Posted about 11 years ago by A.Bouchez
Since most CRUD operations are centered within the scope of our mORMot server, we implemented in the ORM an integrated means of tracking changes (aka Audit Trail) of any TSQLRecord.
In short, our ORM is transformed into a time machine, just like the good old DeLorean!
Keeping track of the history of business objects is a very common need for software modeling, and a must-have for any accurate data modeling, like Domain-Driven Design.
By default, as expected by the OOP model, any change to an object forgets any previous state of this object. But thanks to mORMot's exclusive change-tracking feature, you can persist the history of your objects.
Enabling audit-trail
By default, the change-tracking feature is disabled, saving performance and disk use.
But you can enable change tracking for any class, by calling the following method on the server side:
aServer.TrackChanges([TSQLInvoice]);
This single line will let aServer: TSQLRestServer monitor all CRUD operations, and store all changes of the TSQLInvoice table within a TSQLRecordHistory table.
Since all content changes will be stored in this single table by default (note that the TrackChanges() method accepts an array of classes as parameters, and can be called several times), it may be handy to define several tables for history storage. Later on, an external database engine may be defined to store the history, e.g. on cheap hardware (and big hard drives), whereas your main database may be powered by high-end hardware (and small SSDs) - see External database access.
To do so, you define your custom class for history storage, then supply it as a parameter:
type
TSQLRecordSecondaryHistory = class(TSQLRecord);
(...)
aServer.TrackChanges([TSQLInvoice],TSQLRecordSecondaryHistory);
Then, all history will be stored in this TSQLRecordSecondaryHistory class (in its own table named SecondaryHistory), and not in the default TSQLRecordHistory class (in its History table).
A true Time Machine for your objects
Once the object changes are tracked, you can later browse the history of an object, by using the TSQLRecordHistory.CreateHistory(), then HistoryGetLast(), HistoryCount and HistoryGet() methods:
var aHist: TSQLRecordSecondaryHistory;
    aInvoice: TSQLInvoice;
    aEvent: TSQLEvent;
    aTimeStamp: TModTime;
    i: integer;
(...)
aInvoice := TSQLInvoice.Create;
// retrieve the history of the TSQLInvoice record with ID=400
aHist := TSQLRecordSecondaryHistory.CreateHistory(aClient,TSQLInvoice,400);
try
  writeln('Number of items in the record history: ',aHist.HistoryCount);
  for i := 0 to aHist.HistoryCount-1 do begin
    aHist.HistoryGet(i,aEvent,aTimeStamp,aInvoice);
    writeln;
    writeln('Event: ',GetEnumName(TypeInfo(TSQLEvent),ord(aEvent))^);
    writeln('TimeStamp: ',TTimeLogBits(aTimeStamp).ToText);
    writeln('Value: ',aInvoice.GetJSONValues(true,true,soSelect));
  end;
finally
  aHist.Free;
  aInvoice.Free;
end;
As a result, our ORM is also transformed into a true time machine, for the objects which need it.
This feature is available on both client and server sides, via the TSQLRecordHistory table.
Automatic history packing
This TSQLRecordHistory class will in fact create a
History table in the main database, defined as such:
TSQLRecordHistory = class(TSQLRecord)
(...)
published
/// identifies the modified record
property ModifiedRecord: PtrInt read fModifiedRecord;
/// the kind of modification stored
property Event: TSQLEvent read fEvent;
/// for seAdd/seUpdate, the data stored as JSON
property SentDataJSON: RawUTF8 index 4000 read fSentData;
/// when the modification was recorded
property TimeStamp: TModTime read fTimeStamp;
/// after some events are written as individual SentData content, they
// will be gathered and compressed within one BLOB field
property History: TSQLRawBlob read fHistory;
end;
In short, any modification via the ORM will be stored in the TSQLRecordHistory table, as a JSON object of the changed fields, in TSQLRecordHistory.SentDataJSON.
By design, direct SQL changes are not handled. If SQL statements like DELETE FROM ... or UPDATE ... SET ... are executed within your application or from any external program, then the History table won't be updated.
In fact, the ORM does not set any DB triggers to track low-level changes: it would slow down the process, and void the persistence agnosticism paradigm we want to follow, e.g. allowing the use of a NoSQL database like MongoDB.
When the history grows, the JSON content may become huge, and fill the disk space with a lot of duplicated content. In order to save disk space, when a record reaches a defined number of JSON data rows, all this JSON content is gathered and compressed into a BLOB, in TSQLRecordHistory.History.
You can force this packing process by calling TSQLRestServer.TrackChangesFlush() manually in your code. Calling this method will also have a welcome side effect: it will read the actual content of the record from the database, then add a fake seUpdate event to the history if the field values do not match the ones computed from tracked changes, to ensure that the audit trail will be correct. As a consequence, the history will always stay synchronized with the actual data persisted in the database, even if external SQL did by-pass the CRUD methods of the ORM, via unsafe DELETE FROM ... or UPDATE ... SET ... statements.
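As a sketch of such a manual flush (assuming the method takes the history storage class as its parameter, and the default TSQLRecordHistory table is used):

```pascal
// force the gathered SentDataJSON rows of the default history
// table to be compressed into the History BLOB right now,
// e.g. before a planned database backup
aServer.TrackChangesFlush(TSQLRecordHistory);
```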
You can tune how packing is defined for a given TSQLRecord
table, by using some optional parameters to the registering method:
procedure TrackChanges(const aTable: array of TSQLRecordClass;
aTableHistory: TSQLRecordHistoryClass=nil; aMaxHistoryRowBeforeBlob: integer=1000;
aMaxHistoryRowPerRecord: integer=10; aMaxUncompressedBlobSize: integer=64*1024); virtual;
Take a look at the documentation of this method (or the comments in its
declaration code) for further information.
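For instance, the registration could be tuned as follows, using the optional parameters of the declaration above (the threshold values below are illustrative only, not recommendations):

```pascal
// track TSQLInvoice changes in a dedicated history table, with
// custom packing thresholds (values are illustrative only):
aServer.TrackChanges([TSQLInvoice],TSQLRecordSecondaryHistory,
  500,       // aMaxHistoryRowBeforeBlob: pack after 500 JSON rows
  20,        // aMaxHistoryRowPerRecord: gather 20 rows per record
  128*1024); // aMaxUncompressedBlobSize: up to 128 KB uncompressed
```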
Default options will let TSQLRestServer.TrackChangesFlush() be called after 1000 individual TSQLRecordHistory.SentDataJSON rows are stored, then compress them into a BLOB once 10 JSON rows are available for a given record, ensuring that the uncompressed BLOB size for a single record won't use more than 64 KB of memory (but probably much less in the database, since it is stored with a very high compression rate).
Feedback is welcome on our forum, as usual.
Posted about 11 years ago by A.Bouchez
Since there were recently some articles about performance comparisons between several versions of the Delphi compiler, we had to react, and give our personal point of view.
IMHO there won't be any definitive statement about this.
I'm always doubtful about any conclusion which may be reached with such kinds of benchmarks.
Asking "which compiler is better?" is IMHO the wrong question.
As if there was some "compiler magic": the new compiler would be just like a new laundry detergent - it would wash cleaner and whiter...
Performance is not about marketing.
Performance is an iterative process, always a matter of circumstances and implementation.
Circumstances of the benchmark itself: each benchmark reports information only about the process it measured.
What you compare is a limited set of features, running most of the time an idealized and simplified pattern, which shares nothing with real-world processing.
Implementation is what gives performance.
Changing a compiler will only give you a few percent of time change.
Identifying the true bottlenecks of an application via a profiler, then changing the implementation of the identified bottlenecks, may give orders of magnitude of speed improvement.
For instance, multi-threading abilities can be achieved by following some simple rules.
With our huge set of regression tests, we have at hand more than 16,500,000 individual checks, covering low-level features (like numerical and text marshaling), and high-level processes (like concurrent client/server and multi-threaded database access).
You will find here some benchmarks run with Delphi 6, 7, 2007, XE4 and XE6 under Win32, and XE4 and XE6 under Win64.
In short, all compilers perform more or less at the same speed.
Win64 is a little slower than Win32, and the fastest appears to be Delphi 7, using our enhanced and optimized RTL.
Delphi 6 compiler
Time elapsed for all tests: 35.38s
Delphi 7 compiler (with our enhanced RTL)
Time elapsed for all tests: 34.79s
Delphi 2007 compiler
Time elapsed for all tests: 36.04s
Delphi XE4 compiler
Time elapsed for all tests: 38.09s
Delphi XE6 compiler
Time elapsed for all tests: 37.53s
Delphi XE4 64 bit compiler
Time elapsed for all tests: 41.40s
Delphi XE6 64 bit compiler
Time elapsed for all tests: 40.87s
You can find details about those regression tests in the mORMot regression test text reports.
Or, even better, you can run all the tests by yourself.
This is not a definitive answer.
In short, for most real processes, the Delphi compiler did not improve execution speed.
On the contrary, we may say that the generated executables are slightly slower with newer versions.
The compiler itself is perhaps not the main point in our tests, but rather the RTL, which has not been modified with speed in mind since Delphi 2010.
Even if mORMot code by-passes the RTL for most of its process, we can still see some speed regressions when compared to pre-Unicode versions of Delphi.
In some cases, the generated asm is faster since Delphi 2007, mainly due to function inlining abilities.
But we can't say that the Delphi compiler generates much better code in newer versions.
And we can assure you that the RTL is a true bottleneck: from our experiments, Win64 process is only slightly slower than Win32, due to the fact that we by-pass the RTL, and use our own set of low-level routines (including optimized x64 asm in SynCommons.pas).
When testing the FreePascal compiler, we found that its generated code is slightly slower than Delphi's.
Floating-point is much faster with FreePascal than with Delphi, but for common code (like our framework regression tests), FreePascal is slightly less efficient than Delphi.
Still, it is perfectly usable in production, generating smaller executables, with better cross-platform support, and a tuned RTL.
So, is it worth upgrading?
Are newer versions of Delphi worth the price?
To be fair... the Delphi compiler has not improved much in the last 10 years...
But the same is true for GCC and other compilers!
The only dimension where performance improved by orders of magnitude is floating-point processing, via auto-vectorization of the code using SSE instructions. But for business code (like database or client-server process), the main point is definitely not the compiler, but the algorithm. Hardware did improve a lot (pipelining, cache, multi-core...), and is the main improvement axis.
Feedback is welcome on our forum, as usual.
Posted about 11 years ago by A.Bouchez
We had a very instructive discussion in our forum with Silvio, the maintainer of the Brook Framework.
Brook is a nice framework for writing web applications using Free Pascal.
It made me think about what mORMot can offer.
We did not want to compare the features or say that one framework is better than the other, but it appeared to me that a lot of Object Pascal programmers are tied to a 20th century programming model.
In fact, to embrace the potential of mORMot, you need to switch your mind, and extend your RAD and OOP background into the 21st century SOLID model.
Let's just say that if you want to implement a known design approach like Domain-Driven Design, you will have all the needed bricks available with mORMot to focus on modeling, whereas you will have to write much more code with Brook.
"Convention over configuration" means that web services, HTTP and REST are means, not goals.
The conventions available in mORMot allow writing code without any knowledge of what GET/POST/PUT is, or how routing is handled.
And if you need to tune the default behavior, you still can.
Silvio got it right: most of the complexity of the mORMot internal core comes from the "conventional" approach and the "abstraction of technical details".
And, to be honest, the other main feature which introduced complexity in the implementation was our goal of performance and multi-thread friendliness.
What Silvio called "bureaucracy" in his post is that modern serious coding (e.g. DDD) uncouples your logic from the technical details by which it is implemented.
It is not "bureaucracy", it is 21st century software design.
For instance, your business logic code should be uncoupled from implementation details like transport, security, persistence and marshaling.
This is all about the SOLID principles, and relies on abstraction.
IMHO interface support and dependency injection are mandatory for modern business projects, so that you can stub/mock any part of your application (DB, transport...) and maintain/test it.
It is mandatory for a test-driven approach, and serious modern programming.
In short, you have several levels of code quality:
The RAD approach, which mixes UI and logic/persistence with components;
The OOP approach, which tries to uncouple the tiers with classes;
The SOLID approach, which relies on abstraction and uncoupling at all levels, with interfaces.
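As a small hand-written sketch of what stubbing an interface looks like (the ICreditCheck and TCreditCheckStub names below are hypothetical, not taken from mORMot or Brook): the business code depends only on the interface, so a unit test can hand it a canned implementation instead of a real remote service.

```pascal
type
  // the business code depends only on this abstraction
  ICreditCheck = interface
    ['{7C2B3B74-02C1-4E64-AD8B-3E9F6A1C2D55}']
    function IsCreditWorthy(const aCustomer: string): boolean;
  end;

  // a hand-written stub for unit tests: no network, no DB
  TCreditCheckStub = class(TInterfacedObject, ICreditCheck)
  public
    function IsCreditWorthy(const aCustomer: string): boolean;
  end;

function TCreditCheckStub.IsCreditWorthy(const aCustomer: string): boolean;
begin
  result := true; // canned answer, good enough for the test
end;
```

A dependency-injection-aware framework (like mORMot or Spring4D) goes further, generating such stubs/mocks at runtime from the interface RTTI, so you do not even have to write the fake class yourself.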
Brook is a clean OOP solution, but not a SOLID one.
SOLID design may be something new to you, as it is for most Pascal developers.
If you were involved in serious software development in C# (or Java), as I was, you are perhaps already familiar with this set of design principles.
You may think SOLID is not worth it, and call it "bureaucracy" or "marketing stuff".
BTW, this is the reason why there are some mandatory design-principles chapters in the mORMot documentation - take a look at the SAD 1.18 pdf.
It is also what Nick Hodges tried to introduce Delphi programmers to in his latest book, "Coding in Delphi".
I still do not need generics or attributes as implemented in modern Delphi (which IMHO pollute the code), and find in the FPC and Delphi 7/2007 syntax all that I need to write SOLID code.
But Nick advocates the same principles that drove mORMot's architecture, mainly DI and TDD.
I can assure you that SOLID is much more "practical" than regular OOP design.
My point of view is that mORMot's SOLID design lets you be much more productive than a regular OOP design, as offered by Brook.
Just try to write e.g. 10 small SOA services and consume them with clients in both frameworks, unit-test them, and you will find out what I mean...
Thanks to Delphi interface features, you can write real SOLID code, and still benefit from Object Pascal's strengths.
Brook, and even the current state of the FCL, or even the Delphi RTL, are just not able to follow the SOLID patterns out of the box.
They only offer sets of classes, gathered by units, to do amazing things.
But you need external tools (like Spring4D or mORMot) to directly implement SOLID patterns, i.e. use interfaces with ease (dependency injection) and safety (weak reference pointers).
If I make mORMot compatible with Free Pascal, I guess it will even add a lot of features to the FCL, e.g. stubbing/mocking and such.
It could benefit the whole community!
Make sure you take a look at the slides we shared about all those design principles.
By the way, thanks Bill for the review!
Feedback is welcome in our forum, as usual!
Any help with porting mORMot to FPC, to give us some motivation, is welcome.
Posted over 11 years ago by A.Bouchez
Cyclic Redundancy Check (CRC) codes are widely used for integrity checking of data in fields such as storage and networking.
There is an ever-increasing need for very high-speed CRC computations on processors for end-to-end integrity checks.
We just introduced to mORMot's core unit (SynCommons.pas) a fast and efficient crc32c() function.
It will use either:
Optimized x86 asm code, with unrolled loops;
The SSE 4.2 hardware crc32 instruction, if available.
The resulting speed is very good.
This is for sure the fastest CRC function available in Delphi.
Note that there is a version dedicated to each of the Win32 and Win64 platforms - both perform at the same speed!
In fact, most popular file formats and protocols (Ethernet, MPEG-2, ZIP, RAR, 7-Zip, GZip, and PNG) use the polynomial $04C11DB7, while Intel's hardware implementation is based on another polynomial, $1EDC6F41 (used in iSCSI and Btrfs).
So you should not use this new crc32c() function to replace zlib's crc32() function, but as a convenient, very fast hashing function at the application level.
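For reference, here is a minimal rolled sketch of the CRC-32C algorithm - this is only an illustration of the table-based approach, not the optimized unrolled or SSE 4.2 code from SynCommons.pas. $82F63B78 is the bit-reflected form of the $1EDC6F41 polynomial:

```pascal
// minimal rolled CRC-32C (Castagnoli) - illustration only:
// SynCommons.pas ships unrolled asm and SSE 4.2 versions instead
var
  Crc32cTab: array[byte] of cardinal;

procedure InitCrc32cTab;
var i, j: integer;
    crc: cardinal;
begin
  for i := 0 to 255 do begin
    crc := i;
    for j := 1 to 8 do
      if (crc and 1) <> 0 then
        crc := (crc shr 1) xor $82F63B78 // reflected $1EDC6F41
      else
        crc := crc shr 1;
    Crc32cTab[i] := crc;
  end;
end;

function Crc32cRolled(crc: cardinal; data: PByteArray; len: integer): cardinal;
var i: integer;
begin
  result := not crc; // i.e. start from $FFFFFFFF when crc=0
  for i := 0 to len-1 do // one table lookup per input byte
    result := Crc32cTab[byte(result) xor data^[i]] xor (result shr 8);
  result := not result;
end;
```

The unrolled crc32cfast() version processes several bytes per loop iteration with multiple lookup tables, which is how it reaches a much higher throughput than this simple one-byte-at-a-time loop.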
For instance, our TDynArray wrapper will use it for fast item hashing. Here are some speed results, run on a Core i7 notebook.
We hashed 10,000 random strings, from 1 to 1250 chars long:
Our optimized unrolled x86 version - aka crc32cfast() - performs the test at a very good pace of 1.7 GB/s;
The SSE 4.2 version - aka crc32csse42() - gives an amazing 3.7 GB/s (on both Win32 and Win64 platforms);
A simple rolled version of the algorithm (similar to the one in the Delphi zlib unit) runs at 330 MB/s.
For comparison, on the same random content:
Our optimized unrolled kr32() function (i.e. the standard Kernighan & Ritchie hash taken from "The C Programming Language", 3rd edition) hashes at 898.8 MB/s;
Our simple proprietary Hash32() function runs at 2.5 GB/s, but with many more collisions.
Feedback and numbers are welcome in our forum, as usual!
Posted over 11 years ago by A.Bouchez
Since Delphi 2010, the compiler generates additional RTTI at compilation, so that all record fields are described, and available at runtime.
By the way, this enhanced RTTI is one of the reasons why executables grew so much in newer versions of the compiler.
Our SynCommons.pas unit is now able to use this enhanced information, and lets any record be serialized via the RecordLoad() and RecordSave() functions, and the whole internal JSON marshalling process.
In short, you have nothing to do.
Just use your records as parameters and, with Delphi 2010 and up, they will be serialized as valid JSON objects.
Of course, text-based definition or callback-based registration are still at hand, and will be used with older versions of Delphi.
But they can also be used to by-pass or extend the enhanced-RTTI serialization, even on newer versions of the compiler.
Enhanced RTTI support for records and dynamic arrays was added by this commit.
The documentation has been enhanced in synch!
Please ensure that you downloaded the latest SAD 1.18 pdf revision!
Serialization for older Delphi versions
Sadly, the information needed to serialize a record is available only since Delphi 2010.
If your application is developed with any older revision (e.g. Delphi 7, Delphi 2007 or Delphi 2009), you won't be able to automatically serialize records as plain JSON objects directly.
You have several paths available:
By default, the record will be serialized as binary, encoded as Base64 text;
Or you can define method callbacks which will write or read the data as you expect;
Or you can define the record layout as plain text.
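As a sketch of that last path, assuming the TTextWriter.RegisterCustomJSONSerializerFromText() registration method of SynCommons.pas, with a hypothetical TMyRec record (field names and types below are illustrative only):

```pascal
type
  TMyRec = packed record
    Name: RawUTF8;
    Amount: currency;
  end;

// register a plain-text layout once at startup, so that even
// pre-2010 compilers serialize TMyRec as a proper JSON object
TTextWriter.RegisterCustomJSONSerializerFromText(
  TypeInfo(TMyRec),'Name: RawUTF8; Amount: currency');
```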
Note that any custom serialization (either via callbacks, or via text definition) will override any previously registered method, even the mechanism using the enhanced RTTI.
You can change the default serialization to easily meet your requirements.
For instance, this is what SynCommons.pas does for any TGUID content, which is serialized as the standard JSON text layout (e.g. "C9A646D3-9C61-4CB7-BFCD-EE2522C8F633"), and not following the TGUID record layout as defined in the RTTI, i.e. "D1":12345678,"D2":23023,"D3":9323,"D4":"0123456789ABCDEF" - which is far from convenient.
|