
New edonkey client - Lphant Lite 2.01


Postby SharedHolder » Tue May 09, 2006 2:29 am

Lphant Lite 2.01

Lphant Lite is a complete solution to find, download and share any file. Lphant Lite allows you to download and share files of any kind or size with millions of other people using the eDonkey network, where more than 25 million files are available, or to download any of the millions of torrent files you can find on the Net. With Lphant Lite you will be able to download virtually any content you wish. Lphant Lite is fast and easy to use, with the most sophisticated file sharing technology available. Lphant Lite is completely adware/spyware free. If you have previously installed a non-clean version of Lphant, you are advised to uninstall it and run Lavasoft Ad-Aware before installing this version.


http://lphant-lite.atspace.com/

Has anyone tried Lphant?

Re: New edonkey client - Lphant Lite 2.01

Postby P2P_G » Tue May 09, 2006 9:04 am

SharedHolder wrote:Has anyone tried Lphant?


Lphant Lite is kind of a rip of the original Lphant. Lphant was decent; it gave me good speeds, but I haven't tried BT on it :P and it takes a lot of RAM.
Piracy Will Never Die.

Postby king8654 » Tue May 09, 2006 9:54 am

Just installed it... got a nice little error... Psshhuk

I'll stick with uTorrent :twisted:

Postby P2P_G » Tue May 09, 2006 11:03 am

king8654 wrote:Just installed it... got a nice little error... Psshhuk

I'll stick with uTorrent :twisted:


Yes, uTorrent has better BT support ;)
Piracy Will Never Die.

Postby SharedHolder » Tue May 09, 2006 11:32 am

king8654 wrote:Just installed it... got a nice little error... Psshhuk

I'll stick with uTorrent :twisted:


Maybe it's because you don't have the .NET Framework installed. Lphant Lite is working fine here, and I get better speeds on the eDonkey network than with eDonkey 1.4.3b. :lol:

Postby king8654 » Tue May 09, 2006 11:37 am

I don't doubt it's a good program; it seems pretty good.

Yes, I do have the .NET Framework; the installation just got split into two folders... (don't ask how)

I'm a huge uTorrent fan, so I'll stick with it, but good luck with this.

Postby Trev0r269 » Tue May 09, 2006 12:28 pm

LPhant won't be as good with torrents as a dedicated torrent client.

I'm glad there's a lite version of this. The crapware in the full one turned a lot of people away... that, and the fact that it's coded in .NET (C#?)

Postby Allied » Tue May 09, 2006 2:17 pm

How can it be a new client if it's at 2.01?

That description sucks: you could replace the word Lphant with Shareaza or eDonkey2000 without missing a step.
Allied's Review:
Recommended: LimeWire | Ares | Shareaza | eMule | KCeasy
Not Recommended: Morpheus | Kazaa | eDonkey2000 | Manolito | iMesh

Postby zbeast » Tue May 09, 2006 2:48 pm

I wish people would stop writing applications for the general public in .NET.
.NET sucks: it's not portable, and it's very hard to write cross-platform code with it.
No, I'm not saying use Java, but I am saying use C++ and stop using M$'s stupid network libraries.
Take the time to learn and use the STL, and stop embedding MFC- and .NET-specific calls.

Gurrrrr!

Postby Djgaz1 » Tue May 09, 2006 2:57 pm

Calm down, Zbeast, don't get in a paddy now; it doesn't become you, lol...

Postby IceCube » Tue May 09, 2006 6:47 pm

While we're on the topic of .NET: ever since I installed the Framework, I've had to go through a stupid login screen with only one account on it (mine). Is there any way to make Windows just log straight into my account, so I don't have to take an extra step every time my screensaver decides to kick in?

Postby Inksaver » Tue May 09, 2006 7:23 pm

IceCube wrote:While we're on the topic of .NET: ever since I installed the Framework, I've had to go through a stupid login screen with only one account on it (mine). Is there any way to make Windows just log straight into my account, so I don't have to take an extra step every time my screensaver decides to kick in?


This is a fault of the original .NET 1.0 Framework. It usually goes away once you install the .NET service pack from Windows Update.
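
If the update alone doesn't stop the prompt, the screensaver's password-protect flag itself lives in the registry. Here's a rough C++ sketch of that workaround; I'm assuming the standard XP ScreenSaverIsSecure value is what's biting you, so treat it as a guess, not the official .NET fix:

    #include <windows.h>

    // Sketch: turn off "show the logon screen on resume" for the
    // current user by clearing the per-user ScreenSaverIsSecure value.
    // Standard XP setting, but verify on your own machine first.
    int main() {
        HKEY key;
        if (RegOpenKeyExA(HKEY_CURRENT_USER, "Control Panel\\Desktop",
                          0, KEY_SET_VALUE, &key) != ERROR_SUCCESS)
            return 1;
        RegSetValueExA(key, "ScreenSaverIsSecure", 0, REG_SZ,
                       reinterpret_cast<const BYTE*>("0"), 2);
        RegCloseKey(key);
        return 0;
    }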

Postby tarp404 » Wed May 10, 2006 3:07 am

zbeast wrote:I wish people would stop writing applications for the general public in .NET.
.NET sucks: it's not portable, and it's very hard to write cross-platform code with it.
No, I'm not saying use Java, but I am saying use C++ and stop using M$'s stupid network libraries.
Take the time to learn and use the STL, and stop embedding MFC- and .NET-specific calls.

Gurrrrr!

Amen to that.
what what in the butt

Postby zim » Wed May 10, 2006 1:44 pm

LPhant looks pretty damn good from inside eMule.

I've seen quite a few lp clients show up lately. Good speeds up and down.

They aren't leeches like the damn Shareaza ed2k client is, either.

Postby IneptVagrant » Wed May 10, 2006 7:58 pm

zbeast wrote:I wish people would stop writing applications for the general public in .NET.
.NET sucks: it's not portable, and it's very hard to write cross-platform code with it.
No, I'm not saying use Java, but I am saying use C++ and stop using M$'s stupid network libraries.
Take the time to learn and use the STL, and stop embedding MFC- and .NET-specific calls.

Gurrrrr!


The point of .NET is to decrease the distribution size of applications and decrease the overall memory footprint. Instead of embedding all the various Windows procedures into the exe, they live in the .NET package, and you just link to them via DLLs. Multiple exes can call the same DLLs, so you don't have to keep several copies loaded. You can compile without dependencies on .NET, but then you wouldn't have this 'advantage'.

.NET wasn't the first to do this. Delphi from Borland (probably also not the first) has been doing it for a long time, but people who code in Delphi don't expect users to have the packages, and so distribute them with the software just in case.

Ultimately, if everything were written in .NET, much less memory would be in use at runtime on average, and less disk space would be required. In reality, I agree .NET is annoying, but I wouldn't go so far as to say it shouldn't be used. No programmer considering portability should use .NET, but then who's to say LPhant was considering being portable.
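
To make the sharing mechanism concrete, here's a small C++ sketch of runtime dynamic linking on Windows; ui.dll and DrawToolbar are made-up names, purely for illustration:

    #include <windows.h>
    #include <cstdio>

    // Instead of compiling the toolbar code into every exe (static
    // linking), each program loads the single shared copy at runtime.
    // "ui.dll" and "DrawToolbar" are hypothetical stand-ins.
    typedef void (*DrawToolbarFn)(void);

    int main() {
        HMODULE ui = LoadLibraryA("ui.dll");   // mapped once, shared
        if (!ui) { std::puts("ui.dll not found"); return 1; }

        DrawToolbarFn draw = reinterpret_cast<DrawToolbarFn>(
            GetProcAddress(ui, "DrawToolbar"));
        if (draw) draw();                      // call into the shared code

        FreeLibrary(ui);
        return 0;
    }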

Postby liberator » Wed May 10, 2006 8:42 pm

.NET is NOT a network library.
If you use C++ and code for any OS, how are you going to draw on the screen, for example, without external OS-specific libraries???
I guess Mono and DotGNU have already proven .NET is portable enough...

Seems like zbeast has a big mess in his head regarding .NET, which produces strong feelings. If you have more questions... google them.

Postby Xor » Wed May 10, 2006 10:26 pm

IneptVagrant wrote:The point of .NET is to decrease the distribution size of applications and decrease the overall memory footprint. Instead of embedding all the various Windows procedures into the exe, they live in the .NET package, and you just link to them via DLLs. Multiple exes can call the same DLLs, so you don't have to keep several copies loaded. You can compile without dependencies on .NET, but then you wouldn't have this 'advantage'.

Programs can link to the same DLLs just fine without the .NET Framework.

IneptVagrant wrote:Ultimately, if everything were written in .NET, much less memory would be in use at runtime on average, and less disk space would be required.

The .NET runtime adds a lot of memory overhead per running application as well. It's NOT a one-time penalty, since garbage collection and JIT compiling need added memory per running application.

Native programs already _dynamically_link_ to the SAME system DLLs (kernel32.dll, ole32.dll, comctl32.dll, shell32.dll, crtdll.dll, etc.), unless you use static linking, so what is your point exactly?

I have nothing against .NET; it's perfect for certain projects, it's very easy to program in, and if resource usage is of no concern then it's great for pretty much anything. However, if a small memory footprint is a priority, then .NET is not a good solution, not even if ALL programs were running under .NET.
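
You can check this yourself with a few lines of C++ (link with psapi.lib); any native process will list kernel32.dll, ntdll.dll and friends, each mapped once and shared system-wide:

    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>

    // Print every module the calling process has mapped. Even this
    // tiny program dynamically links a pile of shared system DLLs.
    int main() {
        HMODULE mods[512];
        DWORD needed = 0;
        if (!EnumProcessModules(GetCurrentProcess(), mods,
                                sizeof(mods), &needed))
            return 1;
        DWORD bytes = needed < sizeof(mods) ? needed : (DWORD)sizeof(mods);
        for (DWORD i = 0; i < bytes / sizeof(HMODULE); ++i) {
            char path[MAX_PATH];
            if (GetModuleFileNameExA(GetCurrentProcess(), mods[i],
                                     path, MAX_PATH))
                std::puts(path);
        }
        return 0;
    }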

Postby liberator » Wed May 10, 2006 11:33 pm

Xor wrote:The .NET runtime adds a lot of memory overhead per running application as well. It's NOT a one-time penalty, since garbage collection and JIT compiling need added memory per running application.


Oh, I have a question! Since JIT compiling seems to be a NOT-one-time penalty, how often does it occur??? And if I have a 100k managed app, how big will the native image be (supposedly created over and over, not just once)? Since we are talking A LOT OF MEMORY OVERHEAD, I expect it to be in the gigabytes...
Also, I thought garbage collection actually frees memory... correct me!

Postby Xor » Thu May 11, 2006 1:12 am

liberator wrote:Oh, I have a question! Since JIT compiling seems to be a NOT-one-time penalty, how often does it occur???


True: the JIT compiler only adds overhead per application if it compiles on a per-function basis, but I gather most JIT compilers nowadays compile per file, so it will only run on initial program execution, and it also won't need to keep the bytecode/MSIL in memory afterwards.


liberator wrote:And if I have a 100k managed app, how big will the native image be (supposedly created over and over, not just once)? Since we are talking A LOT OF MEMORY OVERHEAD, I expect it to be in the gigabytes...

That depends on how much the garbage collector needs to keep track of; "a lot of memory overhead" is relative to a native application doing the same thing.


liberator wrote:Also, I thought garbage collection actually frees memory... correct me!

Yes, the garbage collector retrieves memory, but this is not done by magic. It works by trying to determine when an object will no longer be accessible or accessed, which means the garbage collector has to keep track of every object being referenced, so that during the collection cycle (usually triggered by low memory availability) it can reclaim the objects that are unreachable. Keeping track of all those objects costs both memory and CPU (hence overhead); this is not some magic algorithm. Also, apart from unreachable objects (syntactic garbage), there are objects that are still reachable but will never be used again (semantic garbage). Those are much harder for the garbage collector to catch, so they usually linger long after they stop being used, wasting memory.

Compare this to manual memory management, where the programmer knows exactly when memory has served its purpose and can be reclaimed. No external process is required to monitor object state and keep track of its scope of existence.

Garbage collection is nothing new; it's been around since Lisp, sometime in the '60s IIRC. That said, it has made rapid progress over the last couple of years. It's been quite a while since I read up on garbage collectors, so there has likely been a lot of progress since. But no matter how much you optimize a garbage collector, it will never come near the efficiency of manual memory management.

Then again, that's not the point of it; the reason it exists is to make programming easier and less error-prone, since programmers forget to free allocated memory and/or reference dead pointers.

If you can afford the aforementioned overhead, then there is no reason not to use it, but there IS a lot of overhead in terms of memory/CPU usage _COMPARED_ to manually memory-managed native applications.

For a real-life P2P comparison, look at the resource usage of Azureus compared to native applications such as uTorrent, BitComet, etc.
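
To make the syntactic/semantic distinction concrete, here's a toy C++ sketch of my own (the cache and sizes are invented for illustration). The entry below stays reachable, so a collector could never reclaim it; manual management frees it at exactly the point the programmer knows it's dead:

    #include <map>
    #include <string>
    #include <vector>

    // Toy cache: while the map holds an entry, it stays "reachable",
    // so to a garbage collector it is live even if the program will
    // never read it again (semantic garbage).
    std::map<std::string, std::vector<char> > cache;

    void load(const std::string& key) {
        cache[key] = std::vector<char>(1 << 20);   // 1MB buffer
    }

    void done_with(const std::string& key) {
        // Manual management: we KNOW this buffer has served its purpose,
        // so the memory goes back to the heap here, deterministically.
        cache.erase(key);
    }

    int main() {
        load("chunk-1");
        // ... use cache["chunk-1"] ...
        done_with("chunk-1");   // freed exactly when we decide
        return 0;
    }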

Postby liberator » Thu May 11, 2006 2:26 am

A .NET object comes with 12 bytes of overhead: technically 4 bytes (method table pointer) plus 4 bytes (sync block), but Microsoft seems to keep another 4 bytes reserved. So if you have 1 million objects, you get 12MB of overhead. Why do you have 1 million objects in the first place??? Value types, by contrast, have no overhead. Let's say the GC keeps some more data per object (reference counting, generations): worst-case scenario, in this extremely badly designed app with 1 million live objects, you get another 10-20MB of overhead compared to native. That doesn't seem like much to me at all. As a matter of fact, I'm looking at a .NET app of mine right now. It's doing some complex statistical operations and consuming 39MB of memory, of which 15MB is a data cache I allocated on purpose, just to give a measure of the data capacity. It's been running for several hours now, and it never goes much above this value.

As for the CPU usage of the GC, it's very fast anyway. Memory is fast and almost never the problem. Still, there are some interesting questions. What if the GC ran only when the CPU is idle? What if it ran on another CPU or another core? As a matter of fact, idle-time cleanup is a common technique in native apps, especially with MFC: they clean up in the OnIdle() event. For all intents and purposes this is garbage collection. The ed2k client eMule does it, for example...

As for comparing Azureus to the rest: I didn't write any of the three, and the native ones you mentioned aren't even open source. I also don't think conserving memory is one of Azureus's aims. It's been running on my PC for 7 days and is consuming 137MB right now. I have 1.5GB of memory; I could run ten of it side by side. Doesn't bother me at all.
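
For a rough native analogy to that header (my own illustration; exact sizes vary by compiler and platform): a C++ object grows a hidden vtable pointer the moment it has a virtual function, much as every .NET reference object carries a method table pointer, while plain value types carry nothing extra:

    #include <cstdio>

    struct Plain { int x; int y; };                       // no hidden fields
    struct Virt  { int x; int y; virtual ~Virt() {} };    // + hidden vptr

    int main() {
        // Typically 8 vs 12 bytes on a 32-bit compiler
        // (8 vs 16 on 64-bit, after alignment padding).
        std::printf("Plain: %u bytes\n", (unsigned)sizeof(Plain));
        std::printf("Virt:  %u bytes\n", (unsigned)sizeof(Virt));
        return 0;
    }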

Postby Xor » Thu May 11, 2006 4:50 am

liberator wrote:worst-case scenario, in this extremely badly designed app with 1 million live objects, you get another 10-20MB of overhead compared to native. That doesn't seem like much to me at all.

Compared to what? If you don't mind 10-20MB of overhead, then your application definitely isn't performance/resource critical, and in that case there is no reason not to use managed code. However, if speed and resource usage matter, stick to native code.

liberator wrote:As a matter of fact, I'm looking at a .NET app of mine right now. It's doing some complex statistical operations and consuming 39MB of memory, of which 15MB is a data cache I allocated on purpose, just to give a measure of the data capacity. It's been running for several hours now, and it never goes much above this value.

These numbers mean nothing unless I can see the program measured against an equivalent in native code.

liberator wrote:As for the CPU usage of the GC, it's very fast anyway.

Again, it's all relative. On a decent machine, managed code is definitely not a problem; however, a native version using manual memory handling will always be less resource hungry, and depending on the application, the difference can be significant.

liberator wrote:Memory is fast and almost never the problem. Still, there are some interesting questions. What if the GC ran only when the CPU is idle? What if it ran on another CPU or another core? As a matter of fact, idle-time cleanup is a common technique in native apps, especially with MFC: they clean up in the OnIdle() event. For all intents and purposes this is garbage collection. The ed2k client eMule does it, for example...

I'm sure modern garbage collectors use every trick in the book and are very fast for what they do; that doesn't change the fact that they add significant overhead. The garbage collector is an elaborate process that has to perform a lot of logic to determine which parts of allocated storage can be reclaimed. Only once that is done can it call the appropriate heap functionality to return memory. Manual memory management reaches that point directly, since the programmer decides when and where to reclaim memory. And yes, that includes waiting for an event or message or whatever else you use to trigger your manual memory handling.

liberator wrote:As for comparing Azureus to the rest: I didn't write any of the three, and the native ones you mentioned aren't even open source.

I don't quite see what open source or not has to do with resource usage. If you don't want to use programs that aren't open source, I can understand that viewpoint, but it's outside the scope of this discussion.

liberator wrote:I also don't think conserving memory is one of Azureus's aims. It's been running on my PC for 7 days and is consuming 137MB right now. I have 1.5GB of memory; I could run ten of it side by side. Doesn't bother me at all.

And you could likely run 130 uTorrent instances side by side; what is your point? I have a P4 2400 and 1GB of RAM: not a supermachine, but definitely sufficient for running Azureus or any other managed-code project. However, I have never felt that I had too much RAM. I usually run 6-12 applications at once, most of them CPU/memory intensive: 3D programs, encoding, compression, etc. These programs are resource hungry by their very nature, but are still very efficient at the tasks they perform (in short, they avoid as much overhead as possible). Azureus (to pick an example) is NOT efficient at the task it performs, as shown by comparison with native BitTorrent clients. For me personally, choosing between 130MB for Azureus and 9-12MB for uTorrent, with the remaining ~120MB going to those resource-hungry-by-nature programs, is a no-brainer.

Now, if I just used my computer to download torrents, watch some movies and browse the web, then using Azureus wouldn't really matter. But that doesn't make it any more efficient.

Again, I'm not picking a fight here; I have nothing against managed code and would gladly use it for projects where resource usage is not a major concern.

Postby liberator » Thu May 11, 2006 8:14 pm

Xor wrote:
liberator wrote:worst-case scenario, in this extremely badly designed app with 1 million live objects, you get another 10-20MB of overhead compared to native. That doesn't seem like much to me at all.

Compared to what? If you don't mind 10-20MB of overhead, then your application definitely isn't performance/resource critical, and in that case there is no reason not to use managed code. However, if speed and resource usage matter, stick to native code.


So I'm sitting here in a remote session watching Apache idle on my dev server. Only 8 people in the world know the address, so I guarantee you there is no load whatsoever. Apache is doing nothing right now, and obviously I'm not giving it any work, since I'm writing here instead of working. Meanwhile it is happily consuming 40MB of memory. Surely you won't say that Apache is a resource hog or badly written, seeing as it runs most of the Internet. Yet those 40MB are, for all practical purposes, "unnecessary" overhead.

Xor wrote:Again, it's all relative. On a decent machine, managed code is definitely not a problem; however, a native version using manual memory handling will always be less resource hungry, and depending on the application, the difference can be significant.


And that's exactly where the big problem lies. You obviously have a user-land point of view, but a P2P application is certainly not a typical user-land app. When we talk about high-performance apps, kernel-land concepts matter much more: context switches, user-kernel transitions, caching, I/O, and of course the mother of them all, paging/virtual memory. A server must not page! So let me ask you a question. When you write your deterministic new and delete statements in C++, do you think about heap fragmentation? Have you written your own versions of them with more efficient memory management for high-performance scenarios in mind, e.g. as outlined in Matt Pietrek's book written 10 years ago for Win95 (!)? Do you use specific data structures that are kinder to virtual memory? Does anyone do it? And if you did, as you should, won't you agree that most of the science behind it is exactly what a garbage collector would use? There are only so many ways to do these things...

Of course, when you say "always", if you actually mean that a genius programmer writing deterministic memory management will outperform an automated software algorithm, I would agree. But show me such a programmer! For the record, uTorrent uses magic tricks for its memory management, not design patterns, so apart from it being closed source I wouldn't accept it as an example.


Xor wrote:Again, I'm not picking a fight here; I have nothing against managed code and would gladly use it for projects where resource usage is not a major concern.

I'm not picking a fight either, just testing your science. :)

The myth that native code is always the fastest and most efficient is just an excuse to write bad software. It's not C++ that gives you power; it's knowledge of your system. The rest is just hard work.

Postby IneptVagrant » Thu May 11, 2006 8:39 pm

Xor wrote:Native programs already _dynamically_link_ to the SAME system DLLs (kernel32.dll, ole32.dll, comctl32.dll, shell32.dll, crtdll.dll, etc.), unless you use static linking, so what is your point exactly?


I wasn't talking about system libraries; I was talking about the .NET class libraries. The point is not to have the code for a toolbar, a right-click context menu, a text box, a window, or any number of other standard components duplicated in every single program that uses one of them. That has nothing to do with 'system' DLLs.

And I wasn't talking about the overhead of JIT or garbage collection; I was talking about code replication and reuse. Why build a new wheel, and storage for it, when you can just borrow one?

I compared it to Delphi because I personally have no experience programming in .NET. In my experience with Delphi, a program that depends on the runtime packages is about 600KB smaller (exe size) than one compiled without dependencies. I don't even know for sure whether you can compile without class library dependencies in .NET, but if you can, it would be easy to show that .NET saves space for every program that uses it. Not necessarily overall, though, since the .NET class libraries obviously require a substantial footprint of their own.

Finally, I made the unstated assumption that any program written in .NET has a GUI, or at minimum that a significant portion of it is written using the class libraries. Because if you aren't using the class libraries (and they're not limited to GUIs), there is no significant advantage in using .NET.

Postby Xor » Thu May 11, 2006 10:31 pm

liberator wrote:So I'm sitting here in a remote session watching Apache idle on my dev server. Only 8 people in the world know the address, so I guarantee you there is no load whatsoever. Apache is doing nothing right now, and obviously I'm not giving it any work, since I'm writing here instead of working. Meanwhile it is happily consuming 40MB of memory. Surely you won't say that Apache is a resource hog or badly written, seeing as it runs most of the Internet. Yet those 40MB are, for all practical purposes, "unnecessary" overhead.

No: it is not programmed to conserve memory, it is programmed to serve clients as FAST as possible, so for the task at hand it is very efficient. A managed-code version of it would use more memory and more CPU. Again you are totally missing the point: managed code can't beat an equivalent native program in terms of speed and memory usage. To perform as well as possible, Apache was written in C.

liberator wrote:When you write your deterministic new and delete statements in C++, do you think about heap fragmentation?

Depending on the application, of course. If my application uses tons of objects (or structures, or just plain data blocks), and performance matters more than memory consumption, I would tailor the program to recycle existing memory without returning it to the heap.
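
Something like this per-class free list, for instance; a rough sketch of the recycling idea (Packet and its size are invented, and it's not thread-safe or null-checked):

    #include <cstdlib>
    #include <new>

    struct Packet {
        char payload[256];

        static void* operator new(std::size_t sz) {
            if (free_list) {                  // hot path: recycle a freed node
                void* p = free_list;
                free_list = free_list->next;
                return p;
            }
            return std::malloc(sz);           // cold path: hit the heap once
        }
        static void operator delete(void* p) {
            Node* n = static_cast<Node*>(p);  // keep the block for reuse
            n->next = free_list;
            free_list = n;
        }

    private:
        struct Node { Node* next; };          // overlays freed blocks
        static Node* free_list;
    };

    Packet::Node* Packet::free_list = 0;

Allocation becomes a couple of pointer moves, and freed blocks never churn through the general heap, which is what keeps both the allocator cost and fragmentation down.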

liberator wrote:Have you written your own versions of them with more efficient memory management for high-performance scenarios in mind, e.g. as outlined in Matt Pietrek's book written 10 years ago for Win95 (!)? Do you use specific data structures that are kinder to virtual memory? Does anyone do it?

Yes, at work I have written several memory subsystems tailored to the task at hand. We work primarily under Windows (since our customers do), so I'd use VirtualAlloc as the base and then pin subsystem functionality on top as needed to suit the application as well as possible. For an application where allocation performance is of no concern, I stick to HeapAlloc. The same goes for networking: if I need to handle tons of connections, I go for an I/O completion ports solution; otherwise I use a simpler I/O model, like WSAEventSelect. Again you are wrong: there are lots of ways to do these things, even within the boundaries of the native API.
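
For flavour, this is roughly what "VirtualAlloc as the base" means; a stripped-down bump allocator (a sketch only: no growth, no error handling, no thread safety):

    #include <windows.h>
    #include <cstddef>

    // Reserve and commit one region up front, then hand out memory by
    // bumping a pointer. reset() recycles the whole region at once.
    class Arena {
        char*  base;
        size_t used, cap;
    public:
        explicit Arena(size_t bytes) : base(0), used(0), cap(bytes) {
            base = static_cast<char*>(
                VirtualAlloc(0, bytes, MEM_RESERVE | MEM_COMMIT,
                             PAGE_READWRITE));
        }
        void* alloc(size_t n) {
            n = (n + 15) & ~static_cast<size_t>(15);   // 16-byte align
            if (!base || used + n > cap) return 0;
            void* p = base + used;
            used += n;
            return p;
        }
        void reset() { used = 0; }     // "free" everything in one shot
        ~Arena() { if (base) VirtualFree(base, 0, MEM_RELEASE); }
    };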

liberator wrote:And if you did, as you should, won't you agree that most of the science behind it is exactly what a garbage collector would use? There are only so many ways to do these things...

The garbage collector returns memory to the heap. It calls the same, or at least equivalent, heap functionality that a native program calls when returning memory, only with the overhead of the GC process on top. Do you not see this simple fact? OK, let me give you the simplest analogy I can come up with at short notice:

You buy a broom (heap memory functionality) so you can clean your floor; that's native code. Or you spend more money on a maid (garbage collector) who comes in, takes the broom and cleans up for you; that's managed code.

Now, the maid plus the broom costs more than the broom alone, but you no longer have to worry about cleaning, so to you it may be worth the extra cost.

liberator wrote:Of course, when you say "always", if you actually mean that a genius programmer writing deterministic memory management will outperform an automated software algorithm, I would agree. But show me such a programmer! For the record, uTorrent uses magic tricks for its memory management, not design patterns, so apart from it being closed source I wouldn't accept it as an example.

I'm no genius programmer, but that has no relevance to this discussion.
There is no magic going on here, my friend: automatic memory handling requires the computer to do more work, and thus use more resources, than doing the same thing manually. However, not being forced to manage memory by hand can definitely improve development time and avoid memory-related bugs, so it's a balance.


liberator wrote:The myth that native code is always the fastest and most efficient is just an excuse to write bad software.

That native code is faster than managed code is no myth. An excuse to write bad software? Are you saying native code is bad software?

liberator wrote:It's not C++ that gives you power; it's knowledge of your system. The rest is just hard work.

I definitely agree that knowing the system is important for performance, but in that respect managed code is worse, because it abstracts the system even further away.

