KDE and NVidia (updated)        

The above combination was never a painless experience; still, at some point in the past it seemed better to have an NVidia card on Linux than anything else, so I continued to buy them whenever my system was upgraded. Lately, though, it has started to treat me rather badly. I have two computers: one with a 4-core Intel CPU and 8GB of memory, the other a Core2Duo with 3GB. The latter is a Lenovo laptop. Both have NVidia cards, nothing high end (a Quadro NVS something and a 9300GE, both used in a dual monitor setup), but they should be more than enough for desktop usage. Are they?
Well, something goes wrong there. Is it KDE, is it XOrg, is it the driver? I suspect the latter. From time to time (read: often), I ended up with 100% CPU usage for XOrg. Even though I had 3 cores doing nothing, the desktop was unusable. Slow scrolling, sluggish mouse movements, typed text appearing with a delay, things like that. As if I had an XT. I tried several driver versions, as I didn't always have these issues, but with newer kernels you cannot go back to (too) old drivers. I googled and found others with similar experiences, but no real solution. One suspicion is font rendering for some (non-antialiased) fonts, e.g. Monospace. Switching fonts sometimes seemed to make a difference, but in the end the bug returned. Others said GTK apps under Qt cause the problem, and indeed closing Firefox sometimes helped. But it wasn't a solution. There was also a suggestion to turn the "UseEvents" option on. This really seemed to help, but broke suspend to disk. :( Turning the second display off and on again seemed to help... for a while. Turning off the composite manager did not change the situation.
Finally I tried the latest driver that appeared not so long ago, 256.44. And although the CPU usage of XOrg is still visible, with spikes going up to 20-40%, I gained back control over the desktop. Am I happy with it? Well, no...
But that was only my desktop computer. I quickly updated the driver on the laptop as well, and went on the road. Just to see 100% CPU usage there. :( I did all the tricks again, but nothing helped. Until I had the crazy idea to change my widget theme from the default Oxygen to Plastique. And hurray, the problem went away! It is not perfect; with dual monitors enabled, maximizing a konsole window sometimes takes seconds, but in general the desktop is now usable. And of course this should also give me more uptime on battery.
Do I blame Oxygen? No, not directly. Although it might make sense to investigate what it does that drives the NVidia driver crazy, and report that to NVidia.

So in case you have similar problems, try switching to 256.44, and if that doesn't help, choose a different widget style.

Now, don't tell me to use nouveau or nv. Nouveau gave me graphic artifacts, and it (or KDE?) didn't remember the dual monitor setup. Nv failed the suspend-to-disk test on my machine and doesn't provide the 3D acceleration needed e.g. for Google Earth.

UPDATE: I upgraded my laptop to 4.5.1 (from openSUSE packages). Well, this broke compositing completely; I got only black windows. I saw that a new driver was available (256.53), so let's try it. So far, so good, even with Oxygen. Let's see how it behaves in the long run; I haven't tested it in depth yet.


          The Fedora 7 clock stays frozen        
Ever since I let the Fedora 7 clock (under Gnome) synchronize automatically using a time server, it constantly displays the same time (the time at which the machine booted), even though the system clock shows the correct time. This bug seems to be fixed by moving to the latest kernel version, 2.6.22.1-41, via yum update. If you [...]
          CD-i 180 internals        
In the previous post I promised some ROM and chip finds. Well, here goes. To understand some of the details, you'll need some microprocessor and/or digital electronics knowledge, but even without that the gist of the text should be understandable.

The CDI 181 MMC unit contains the so-called Maxi-MMC board that is not used in any other CD-i player. Its closest cousin is the Mini-MMC board that is used in the CD-i 605 and CD-i 220 F1 players (a derivative of it is used in the CD-i 350/360 players).

The Mini-MMC board uses two 68HC05 slave processors for CD and pointing device control (they are usually called SERVO and SLAVE). The Maxi-MMC board does not have these chips, but it does have two PCF80C21 slave processors labeled RSX and TRANSDUCER that perform similar functions.

From their locations on the board I surmise that the RSX performs CD control functions; I know for sure that the TRANSDUCER performs only pointing device control. The latter is connected to the main 68070 processor via an I2C bus (I've actually traced the connections); I'm not completely sure yet about the RSX.

In order to emulate the pointing devices in CD-i Emulator, I had to reverse engineer the I2C protocol spoken by the TRANSDUCER chip; this was mostly a question of disassembling the "ceniic" and "periic" drivers in the ROM. The first of these is the low-level driver that serves as the common control point for the I2C bus; the second is the high-level driver that is instantiated separately for each type of pointing device. The ROMs support three such devices: /cdikeys, /ptr and /ptr2, corresponding to the player control keys and first and second pointing devices (the first pointing device is probably shared between the infrared remote sensor and the left pointing device port). Both pointing devices support absolute (e.g. touchpad) as well as relative (e.g. mouse) positioning.

Note that there is no built-in support for a CD-i keyboard or modem (you could use a serial port for this purpose).

However, knowing the I2C protocol does not tell me the exact protocol of the pointing devices, which therefore brings me no closer to constructing a "pointing device" that works with the two front panel MiniDIN-9 connectors. Note that these connectors are physically different from the MiniDIN 8 pointing device connectors used on most other CD-i players. According to the Philips flyers, these connectors have 6 (presumably digital) input signals and a "strobe" (STB) output signal. From the signal names I can make some educated guesses about the probable functions of the signals, but some quick tests with the BTN1 and BTN2 inputs did not pan out and it could be too complicated to figure out without measurement of a connected and working pointing device.

There is, however, also an infrared remote sensor that is supposed to expect the RC5 infrared signal protocol. This protocol supports only 2048 separate functions (32 groups of 64 each) so it should not be impossible to figure out, given a suitably programmable RC5 remote control or in the best case a PC RC5 adapter. I've been thinking about building one of the latter.
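
For experimenting it helps that RC5 frames are trivial to construct in software. Here is a minimal sketch, assuming the standard 14-bit RC5 layout (two start bits, a toggle bit, 5 address bits and 6 command bits, biphase-encoded on a 36 kHz carrier); the rc5Frame helper is my invention, not existing code:

// Build the 14 RC5 frame bits; 5 address bits and 6 command bits give
// the 32 groups of 64 functions mentioned above.
#include <cstdint>

uint16_t rc5Frame(uint8_t address, uint8_t command, bool toggle)
{
    return (uint16_t)((1 << 13)                  // start bit S1
                    | (1 << 12)                  // start bit S2
                    | ((toggle ? 1 : 0) << 11)   // toggle bit
                    | ((address & 0x1F) << 6)    // 5 address (system) bits
                    | (command & 0x3F));         // 6 command bits
}

A PC adapter would then have to shift these bits out MSB first, biphase-encoded on the carrier, which is essentially all a do-it-yourself RC5 interface needs to do.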

There is also a third possibility for getting a working pointing device. Although the case label of the front MiniDIN 8 connector is "CONTROL", the Philips flyers label it "IIC", which is another way of writing "I2C", although they don't give a pinout of the port. It seems plausible that the connector is connected to the I2C bus of the 68070, although I haven't been able to verify that yet (the multimeter finds no direct connections except GND, so some buffering must be involved). If there is indeed a connection, I would be able to externally connect to that bus and send and receive the I2C bus commands that I've already reverse engineered.

If even this doesn't work, I can always connect directly to the internal I2C bus, but that involves running three wires from inside the player to outside and I'm not very keen on that (yet, anyway).

Now, about the (so far) missing serial port. There is a driver for the 68070 on-chip UART in the ROMs (the u68070 driver which is accessible via the /t2 device), and the boot code actually writes a boot message to it (CD-i Emulator output):
  PHILIPS CD-I 181 - ROM version 23rd January, 1992.
Using CD_RTOS kernel edition $53 revison $00
At first I thought that the UART would be connected to the "CONTROL" port on the front, but that does not appear to be the case. Tonight I verified (by tracing PCB connections with my multimeter) that the 68070 serial pins are connected to the PCB connector on the right side (they go through a pair of SN75188/SN75189 chips and some protection resistors; these chips are well-known RS232 line drivers/receivers). I even know the actual PCB pins, so if I can find a suitable 100-pin 0.01" spaced double edge print connector I can actually wire up the serial port.

Now for the bad news, however: the ROMs do not contain a serial port download routine. They contain a host of other goodies (more below) but not this particular beast. There is also no pointing device support for this port, contrary to all other players, so connecting up the serial port would not immediately gain me anything; I would still need a working pointing device to actually start a CD-i disc…

There are no drivers for other serial ports in the ROMs, but the boot code does contain some support for a UART chip at address $340001 (probably a 68681 DUART included in the CDI 182 unit which I don't have). The support, however, is limited to the output of boot messages although the ROMs will actually prefer this port over the 68070 on-chip device if they find it.

As is to be expected from a development and test player, there is an elaborate set of boot options, but they can only be used if the ROMs contain the signature "IMS-TC" at byte offset $400 (the ROMs in my player contain FF bytes at these locations). And even then the options prompt will not appear unless you press the space bar on your serial terminal during player reset.

However, adding a -bootprompt option that handles both the signature and the space bar press to CD-i Emulator was not hard, and if you use that option with the 180 ROMs the following appears when resetting the player:
  PHILIPS CD-I 181 - ROM version 23rd January, 1992.

A-Z = change option : <BSP> = clear options : <RETURN> = Boot Now

Boot options:- BQRS
As specified, you can change the options by typing letters; pressing Enter will start the boot process with the specified options.

From disassembling the boot code of the ROMs I've so far found the following options:

D = Download/Debug
F = Boot from Floppy
L = Apply options and present another options prompt (Loop)
M = Set NTSC Monitor mode
P = Set PAL mode
S = Set NTSC/PAL mode from switch
T = Set NTSC mode
W = Boot from SCSI disk (Winchester)

It could be that there's also a C option, and I've not yet found any implementations of the Q and R options that the ROMs include in the default, but they could be hidden in OS-9 drivers instead of the boot code.

Once set, the options are saved in NVRAM at address $313FE0 as defaults for the prompts during subsequent reboots; they are not used for reboots where the option prompt is not invoked.

Options D, F and W look interesting, but further investigation leads to the conclusion that they are mostly useless without additional hardware.

Pressing lower-case D followed by Enter, then Enter again, results in the following:
Boot options:- BQRSd
Boot options:- BDQRS
Enter size of download area in hex - just RETURN for none
called debugger

Rel: 00000000
Dn: 00000000 0000E430 0007000A 00000000 00000000 00000001 FFFFE000 00000000
An: 00180B84 00180570 00313FE0 00410000 00002500 00000500 00001500 000014B0
SR: 2704 (--S--7-----Z--) SSP: 000014B0 USP: 00000000
PC: 00180D2E - 08020016 btst #$0016,d2
debug:
One might think that entering a download size would perform some kind of download (hopefully via the serial port), but that is not the case. The "download" code just looks at location $2500 in RAM, which is apparently supposed to be already filled (presumably via an In-Circuit Emulator or something like it).

However, invoking the debugger is interesting in itself. It looks like the Microware low-level RomBug debugger that is described in the Microware documentation, although I haven't found it in any other CD-i players. One could "download" data with the change command:
debug: c0
00000000 00 : 1
00000001 00 : 2
00000002 15 : 3
00000003 00 :
Not very user-friendly, but it could be done. The immediate catch is that it doesn't work with unmodified ROMs because of the "IMS-TC" signature check!

Trying the F option results in the following:
Boot options:- BQRSf
Boot options:- BFQRS
Booting from Floppy (WD 179x controller) - Please wait
This, however, needs the hardware in the CDI 182 set (it lives at $330001). I could emulate that in CD-i Emulator of course, but there's no real point at this time. It is interesting to note that the floppy controller in the CD-i 605 (which I haven't emulated either at this point) is a DP8473 which is register compatible with the uPD765A used in the original IBM PC but requires a totally different driver (it also lives at a different memory address, namely $282001).

Finally, trying the W option gives this:
Boot options:- BQRSw
Boot options:- BQRSW
Booting from RODIME RO 650 disk drive (NCR 5380 SCSI) - Please wait
Exception Error, vector offset $0008 addr $00181908
Fatal System Error; rebooting system
The hardware is apparently supposed to live at $410000 and is presumably emulatable; it's identical or at least similar to the DP5380 chip that is found on the CD-i 605 extension board, where it lives at $AA0000.

Some other things that I've found out:

The CDI 181 unit has 8 KB of NVRAM, but it does not use the M48T08 chip that's in all other Philips players; it's just a piece of RAM that lives at $310000 (even addresses only) and is supported by the "nvdrv" driver via the /nvr device.

In the CD-i 180 player the timekeeping functions are instead performed by a RICOH RP5C15 chip; the driver is appropriately called "rp5c15".

And there is a separate changeable battery inside the case; no "dead NVRAM" problems with this player! I don't know when the battery in my player was last changed but at the moment it's still functioning and had not lost the date/time when I first powered it on just over a week ago.

The IC CARD slot at the front of the player is handled like just another piece of NVRAM; it uses the same "nvdrv" driver but a different device: /icard. According to the device descriptor it can hold 32 KB of data. I would love to have one of those!
          CD-i 180 adventures        
Over the last week I have been playing with the CD-i 180 player set. There’s lots to tell about, so this will be a series of blog posts, this being the first installment.

The CD-i 180 is the original CD-i player, manufactured jointly by Philips and Sony/Matsushita, and for a score of years it was the development and “reference” player. The newer CD-i 605 player provided a more modern development option but it did not become the “reference” player for quite some years after its introduction.

The CD-i 180 set is quite bulky, as could be expected for first-generation hardware. I have added a picture of my set to the Hardware section of the CD-i Emulator website; more photos can be found here on the DutchAudioClassics.nl website (it's the same player, as evidenced by the serial numbers).

The full set consists of the CDI 180 CD-i Player module, the CDI 181 Multimedia Controller or MMC module and the CDI 182 Expansion module. The modules are normally stacked on top of each other and have mechanical interlocks so they can be moved as a unit. Unfortunately, I do not have the CDI 182 Expansion module nor any user manuals; Philips brochures for the set can be found here on the ICDIA website.

Why am I interested in this dinosaur? It’s the first mass-produced CD-i player (granted, for relatively small masses), although there were presumably some earlier prototype players. As such, it contains the “original” hardware of the CD-i platform, which is interesting from both a historical and an emulation point of view.

For emulation purposes I have been trying to get hold of CD-i 180 ROMs for some years; there are several people that still have fully operational sets, but it hasn't panned out yet. So when I saw a basic set for sale on the CD-Interactive forum I couldn't resist the temptation. After some discussion and a little bartering with the seller I finally ordered the set about 10 days ago. Unfortunately, this set does not include a CDI 182 module or pointing device.

I had some reservations about this being a fully working set, but I figured that at least the ROM chips would probably be okay, if nothing else that would allow me to add support for this player type to CD-i Emulator.

In old hardware the mechanical parts are usually the first to fail, in this case the CDI 180 CD-i Player module (which is really just a CD drive with a 44.1 kHz digital output "DO" signal). A workaround for this would be using an E1 or E2 Emulator unit; these are basically CD drive simulators that on one side read a CD-i disc image from a connected SCSI hard disk and on the other side output the 44.1 kHz digital output "DO" signal. Both the CDI 180 and E1/E2 units are controlled via a 1200 baud RS232 serial input "RS" signal.

From my CD-i developer days I have two sets of both Emulator types so I started taking these out of storage. For practical reasons I decided to use an E1 unit because it has an internal SCSI hard disk and I did not have a spare one lying around. I also dug out an old Windows 98 PC, required because the Philips/OptImage emulation software doesn’t work under Windows XP and newer, and one of my 605 players (I also have two of those). Connecting everything took me a while but I had carefully stored all the required cables as well and after installing the software I had a working configuration after an hour or so. The entire configuration made quite a bit of mechanical and fan noise; I had forgotten this about older hardware!

I had selected the 605 unit with the Gate Array AH02 board because I was having emulation problems with that board, and I proceeded to do some MPEG tests on it. It turns out the hardware allows for some things that my emulator currently does not, which means that I need to do some rethinking. Anyway, on with the 180 story.

In preparation for the arrival of the 180 set I next prepared a disc image of the "OS-9 Disc" that I created in November 1993 while working as a CD-i developer. This disc contains all the OS-9 command-line programs from Professional OS-9, some OS-9 and CD-i utilities supplied by Philips and Microware, and some homegrown ones as well. With this disc you can get a fully functional command-line prompt on any CD-i player with a serial port, which is very useful while researching a CD-i player's internals.

The Philips/Optimage emulation software requires the disc image files to include the 2-second gap before logical block zero of the CD-i track, which is not usually included in the .bin or .iso files produced by CD image tools. So I modified the CD-i File program to convert my existing os9disc.bin file by prepending the 2-second gap, in the process also adding support for scrambling and unscrambling the sector data.

Scrambling is the process of XORing all data bytes in a CD-ROM or CD-i sector with a “scramble pattern” that is designed to avoid many contiguous identical data bytes which can supposedly confuse the tracking mechanism of CD drives (or so I’ve heard). It turned out that scrambling of the image data was not required but it did allow me to verify that the CD-I File converted image of a test disc is in fact identical to the one that the Philips/Optimage mastering tools produce, except for the ECC/EDC bytes of the gap sectors which CD-I File doesn’t know how to generate (yet). Fortunately this turned out not to be a problem, I could emulate the converted image just fine.
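
For the curious: the (un)scrambling itself is tiny. Below is a minimal sketch, assuming the usual ECMA-130 Annex B scrambler (a 15-bit LFSR with polynomial x^15 + x + 1, seeded with 1); the same XOR both scrambles and unscrambles, and the 12 sync bytes of each sector are left alone:

// Scramble or unscramble one 2352-byte raw sector in place.
#include <cstdint>
#include <cstddef>

void scrambleSector(uint8_t *sector)
{
    uint16_t lfsr = 0x0001;                   // initial register value
    for (size_t i = 12; i < 2352; i++)        // skip the sync pattern
    {
        uint8_t pattern = 0;
        for (int bit = 0; bit < 8; bit++)     // LFSR output bits, LSB first
        {
            pattern |= (uint8_t)((lfsr & 1) << bit);
            uint16_t feedback = (uint16_t)((lfsr ^ (lfsr >> 1)) & 1);
            lfsr = (uint16_t)((lfsr >> 1) | (feedback << 14));
        }
        sector[i] ^= pattern;                 // XOR with the scramble pattern
    }
}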

Last Thursday the 180 set arrived and in the evening I eagerly unpacked it. Everything appeared to be in tip-top shape, although the set had evidently seen use.

First disappointment: there is no serial port on the right side of the 181 module. I remembered that this was actually an option on the module and I had not even bothered to ask the seller about it! This would make ROM extraction harder, but I was not completely without hope: the front has a Mini-DIN 8 connector marked “CONTROL” and I fully expected this to be a “standard” CD-i serial port, because I seemed to remember that you could connect standard CD-i pointing devices to this port, especially a mouse. The built-in UART functions of the 68070 processor chip would have to be connected up somewhere, after all.

Second disappointment: the modules require 120V power, not the 220V we have here in Holland. I did not have a voltage converter handy so after some phone discussion with a hardware-knowledgeable friend we determined that powering up was not yet a safe option. He gave me some possible options depending on the internal configuration so I proceeded to open up the CDI 181 module, of course also motivated by curiosity.

The first thing I noticed was that there were some screws missing; obviously the module had been opened before and the person doing it had been somewhat careless. The internals also seemed somewhat familiar, especially the looks of the stickers on the ROM chips and the placement of some small yellow stickers on various other chips.

Proceeding to the primary reason for opening up the module, I next checked the power supply configuration. Alas, nothing reconfigurable for 220V, it is a fully discrete unit with the transformer actually soldered to circuit board on both input and output side. There are also surprisingly many connections to the actual MMC processor board and on close inspection weird voltages like –9V and +9V are printed near the power supply outputs, apart from the expected +5V and +/–12V, so connecting a different power supply would be a major undertaking also.

After some pondering of the internals I closed up the module again and proceeded to closely inspect the back side for serial numbers, notices, etc. They seemed somewhat familiar, but that isn't weird; numbers often do. Out of pure curiosity I surfed to the DutchAudioClassics.nl website to compare serial numbers, wanting to know the place of my set in the production runs.

Surprise: the serial numbers are identical! It appears that this exact set was previously owned by the owner of that website or perhaps he got the photographs from someone else. This also explained why the internals had seemed familiar: I had actually seen them before!

I verified with the seller of the set that he doesn’t know anything about the photographs; apparently my set has had at least four owners, assuming that the website owner wasn’t the original one.

On Friday I obtained a 120V converter (they were unexpectedly cheap) and that evening I proceeded to power up the 180 set. I got a nice main menu picture immediately, so I proceeded to attempt to start a CD-i disc. It did not start automatically when I inserted it, which on second thought makes perfect sense because the 181 MMC module has no way to know that you've just inserted a disc: this information is not communicated over the 180/181 interconnections. So I would need to click on the "CD-I" button to start a disc.

To click on a screen button you need a supported pointing device, so I proceeded to connect the trusty white professional CD-i mouse that belongs with my 605 players. It doesn’t work!

There are some mechanical issues which make it doubtful that the MiniDIN connector plugs connect properly, so I tried an expansion cable that fit better. Still no dice.

The next step was trying some other CD-i pointing devices, but none of them worked. No pointing devices came with the set, and the seller had advised me so (they were presumably lost or sold separately by some previous owner). The only remaining option seemed to be the wireless remote control sensor, which supposedly uses RC5.

I tried every remote in my home, including the CD-i ones, but none of them gave any reaction. After some research into the RC5 protocol this is not surprising; the 180 set probably has a distinct system address code. Not having a programmable remote handy nor a PC capable of generating infrared signals (none of my PCs have IrDA), I was again stuck!

I spent some time surfing the Internet looking for RC5 remotes and PC interfaces that can generate RC5 signals. Programmable remotes requiring a learning stage are obviously not an option, so it would have to be a fully PC-programmable remote; those are somewhat expensive and I'm not convinced they would work. The PC interface seems the best option for now; I found some do-it-yourself circuits and kits but it is all quite involved. I've also given some thought to PIC kits, which could in principle also support a standard CD-i or PC mouse or even a joystick, but I haven't pursued these options much further yet.

Next I went looking for ways to at least get at the contents of the ROM chips, as I had determined that these were socketed inside the MMC module and could easily be removed. There are four 27C100 chips inside the module, each of which contains 128 KB of data, for a total of 512 KB, which is the same as for the CD-i 605 player (ignoring expansion and full-motion video ROMs). The regular way to do this involves using a ROM reading device, but I don't have one handy that supports this chip type, and neither does the hardware friend I mentioned earlier.

I do have access to an old 8-bit Z80 hobbyist-built system capable of reading and writing up to 27512 chips, which are 64 KB; it would be possible to extend this to at least read the 27C100 chip type. This would require adapting the socket (the 27512 has 28 pins whereas the 27C100 has 32 pins) and adding one extra address bit, if nothing else with just a spare wire. But the Z80 system is not at my house and some hardware modifications to it would be required, for which I would have to inspect the system first and dig up the circuit diagrams; all quite disappointing.

While researching the chip pinouts I suddenly had an idea: what if I used the CD-i 605 Expansion board, which also has ROM sockets? This seemed an option, but with two kids running around I did not want to open up the set. That evening, however, I took the board out of the 605 (this is easily done as both player and board were designed for it) and found that this Expansion board contains two 27C020 chips, each containing 256 KB of data. These are also 32 pins but the pinouts are a little different, so a socket adapter would also be needed. I checked the 605 technical manual and it did not mention anything about configurable ROM chip types (it did mention configurable RAM chip types, though), so an adapter seemed the way to go. I collected some spare 40-pin sockets from storage (boy, have I got much of that) and proceeded to open up the 180 set and take out the ROM chips.

When determining the mechanical fit of the two sockets for the adapter I noticed three jumpers adjacent to the ROM sockets of the expansion board, and I wondered… Tracing the board connections indicated that these jumpers were indeed connected to exactly the ROM socket pins that differ between the 27C100 and 27C020, and other connections made it at least plausible that these jumpers were made for exactly this purpose.

So I changed the jumpers and inserted one 180 ROM. This would avoid OS-9 inadvertently using data from the ROM because only half of each 16-bit word would be present, thus ensuring that no module headers would be detected, and in the event of disaster I would lose only a single ROM chip (not that I expected that to be very likely, but you never know).

Powering up the player worked exactly as expected, no suspicious smoke or heat generation, so the next step was software. It turns out that CD-i Link already supports downloading of ROM data from specific memory addresses and I had already determined those addresses from the 605 technical manual. So I connected the CD-i 605 null-modem cable with my USB-to-Serial adapter between CD-i player and my laptop and fired off the command line:

cdilink -p 3 -a 50000 -s 256K -u u21.rom

(U21 being the socket number of the specific ROM I chose first).

After a minute I aborted the upload and checked the result, and lo and behold the u21.rom file looked like an even-byte-only ROM dump:
00000000  4a00 000b 0000 0000 0004 8000 0000 0000 J...............
00000010 0000 0000 0000 003a 0000 705f 6d6c 2e6f .......:..p_ml.o
00000020 7406 0c20 0000 0000 0101 0101 0101 0101 t.. ............
This was hopeful, so I restarted the upload and waited some six minutes for it to complete. Just to be sure I redid the upload from address 58000 and got an identical file, thus ruling out any flaky bits or timing problems (I had already checked that the access times of the 27C100 and 27C020 chips were identical, namely 150ns).

In an attempt to speed up the procedure, I next tried two ROMs at once, using ones that I thought were not a matched even/odd set. The 605 would not boot! It later turned out that the socket numbering did not correspond to the even/odd pairing as I had expected, so this was probably caused by the two ROMs being exactly a matched set and OS-9 getting confused as a result. But using a single ROM it worked fine.

I proceeded to repeat the following procedure for the next three ROMs: turn off the 605, remove the expansion board, unsocket the previous ROM chip, socket the next ROM chip, reinsert the expansion board, turn on the 605 and run CD-i Link twice. It took a while, all in all just under an hour.

While these uploads were running I wrote two small programs, rsplit and rjoin, to manipulate the ROM files into a correct 512 KB 180 ROM image (a sketch of rjoin appears after the module listing below). Around 00:30 I had a final cdi180b.rom file that looked good, and I ran it through cditype -mod to verify that it indeed looked like a CD-i player ROM:
  Addr     Size      Owner    Perm Type Revs  Ed #  Crc   Module name
-------- -------- ----------- ---- ---- ---- ----- ------ ------------
0000509a 192 0.0 0003 Data 8001 1 fba055 copyright
0000515a 26650 0.0 0555 Sys a000 83 090798 kernel
0000b974 344 0.0 0555 Sys 8002 22 b20da9 init
0000bacc 2848 0.0 0555 Fman a00b 35 28611f ucm
0000c5ec 5592 0.0 0555 Fman a000 17 63023d nrf
0000dbc4 2270 0.0 0555 Fman a000 35 d6a976 pipeman
0000e4a2 774 0.0 0555 Driv a001 6 81a3e9 nvdrv
0000e7a8 356 0.0 0555 Sys a01e 15 e69105 rp5c15
0000e90c 136 0.0 0555 Desc 8000 1 f25f23 tim070
0000e994 420 0.0 0555 Driv a00c 6 7b3913 tim070driv
0000eb38 172 0.0 0555 Driv a000 1 407f81 null
0000ebe4 102 0.0 0555 Desc 8000 2 cf450e pipe
0000ec4a 94 0.0 0555 Desc 8000 1 f54010 nvr
0000eca8 96 0.0 0555 Desc 8000 1 17ec68 icard
0000ed08 1934 0.0 0555 Fman a000 31 b41f17 scf
0000f496 120 0.0 0555 Desc 8000 61 dd8776 t2
0000f50e 1578 0.0 0555 Driv a020 16 d0a854 u68070
0000fb38 176 0.1 0777 5 8001 1 a519f6 csd_mmc
0000fbe8 5026 0.0 0555 Sys a000 292 e33cc5 csdinit
00010f8a 136 0.0 0555 Desc 8000 6 041e2b iic
00011012 152 0.0 0555 Driv a02c 22 e29688 ceniic
000110aa 166 0.0 0555 Desc 8000 8 c5b823 ptr
00011150 196 0.0 0555 Desc 8000 8 a0e276 cdikeys
00011214 168 0.0 0555 Desc 8000 8 439a33 ptr2
000112bc 3134 0.0 0555 Driv a016 11 faf88d periic
00011efa 4510 0.0 0555 Fman a003 96 a4d145 cdfm
00013098 15222 0.0 0555 Driv a038 28 122c79 cdap18x
00016c0e 134 0.0 0555 Desc 8000 2 35f12f cd
00016c94 134 0.0 0555 Desc 8000 2 d2ce2f ap
00016d1a 130 0.0 0555 Desc 8000 1 1586c2 vid
00016d9c 18082 10.48 0555 Trap c00a 6 5f673d cio
0001b43e 7798 1.0 0555 Trap c001 13 46c5dc math
0001d2b4 2992 0.0 0555 Data 8020 1 191a59 FONT8X8
0001de64 134 0.0 0555 Desc 8000 2 c5ed0e dd
0001deea 66564 0.0 0555 Driv a012 48 660a91 video
0002e2ee 62622 0.1 0555 Prog 8008 20 ec5459 ps
0003d78c 4272 0.0 0003 Data 8001 1 9f3982 ps_medium.font
0003e83c 800 0.0 0003 Data 8002 1 c1ac25 ps_icons.clut
00040000 2976 0.0 0003 Data 8002 1 0a3b97 ps_small.font
00040ba0 7456 0.0 0003 Data 8002 1 764338 ps_icons.clu8
000428c0 107600 0.0 0003 Data 8002 1 7b9b4e ps_panel.dyuv
0005cd10 35360 0.0 0003 Data 8001 1 2a8fcd ps_girl.dyuv
00065730 35360 0.0 0003 Data 8002 1 e1bb6a ps_mesa.dyuv
0006e150 35360 0.0 0003 Data 8002 1 8e394b ps_map.dyuv
00076b70 35360 0.0 0003 Data 8002 1 c60e5e ps_kids.dyuv

File Size Type Description
------------ ------ ------------ ------------
cdi180b.rom 512K cdi000x.rom Unknown CD-i system ROM
cdi180b.rom 512K cdi000x.mdl Unknown CD-i player
cdi180b.rom 512K unknown.brd Unknown board
Of course cditype didn't correctly detect the ROM, player and board type, but the list of modules looks exactly like that of a CD-i player system ROM. It is in fact very similar to the CD-i 605 system ROM; the major differences are the presence of the icard and *iic drivers, the absence of a slave module, and the different player shell (a ps module with separate ps_* data modules instead of a single play module).
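
As promised, here is a sketch of the rjoin byte interleaving; this is a hypothetical reconstruction of the little tool, assuming the even-ROM byte is the high (even-address) byte of each 16-bit word:

// Interleave an even-byte and an odd-byte ROM dump into one image.
#include <cstdio>

int main(int argc, char *argv[])
{
    if (argc != 4)
    {
        fprintf(stderr, "usage: rjoin even.rom odd.rom out.rom\n");
        return 1;
    }
    FILE *even = fopen(argv[1], "rb");
    FILE *odd  = fopen(argv[2], "rb");
    FILE *out  = fopen(argv[3], "wb");
    if (!even || !odd || !out)
    {
        fprintf(stderr, "rjoin: cannot open files\n");
        return 1;
    }
    int e, o;
    while ((e = getc(even)) != EOF && (o = getc(odd)) != EOF)
    {
        putc(e, out);   // byte at the even (high) address
        putc(o, out);   // byte at the odd (low) address
    }
    fclose(even); fclose(odd); fclose(out);
    return 0;
}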

It being quite late already, I resocketed all the ROMs in the proper places and closed up both players, after testing that they were both fully functional (insofar as I could test the 180 set), fully intending to clean up and go to bed. As an afterthought, I took a picture of the running 180 set and posted it on the CD-Interactive forums as the definitive answer to the 50/60 Hz power question I’d asked there earlier.

The CD-i Emulator urge started itching, however, so I decided to give emulation of my new ROM file a quick go, fully intending to stop at any major problems. I didn't encounter any of those, though, until I had a running CD-i 180 player three hours later. I reported the fact on the CDinteractive forum, noting that there was no pointing device or disc access yet, and went to get some well-deserved sleep. Both of these issues are major ones, and I postponed them to the next day.

To get the new player type up and running inside CD-i Emulator, I started by using the CD-i 605 F1 system specification files cdi605a.mdl and minimmc.brd as templates to create the new CD-i 180 F2 system files cdi180b.mdl and maximmc.brd. Next I fired up the emulator and was rewarded with bus errors. Not unexpected, and a good indicator of where the problems are. Using the debugger and disassembler I quickly determined that the problems were, as expected, the presence of the VSR instead of the VSD and the replacement of the SLAVE by something else. Straightening these out took a bit of time, but it was not hard work and very similar to work I had done before on other player types.

This time at least the processor and most of the hardware were known and already emulated; for the Portable CD-i board (used by the CD-i 370, DVE200 and GDI700 players) neither of these was the case, as those players use the 68341 so-called integrated CD-i engine, which in my opinion is sorely misnamed: there is nothing CD-i about the chip, it is just a Motorola 68K processor with many on-chip peripherals, remarkably similar to the Philips 68070 in basic functionality.

Saturday was spent doing household chores with ROM research in between, looking for the way to get the pointing device working. It turned out to be quite involved but at the end of the day I had it sort of flakily working in a kludgy way; I’ll report the details in a next blog post.

Sunday I spent some time fixing the flakiness and thinking a lot about fixing the kludginess; this remains to be done. I also spent time making screenshots and writing this blog post.

So to finish up, there is now a series of 180 screenshots here on the CD-i Emulator website as reported in the What's New section. A very nice player shell, actually, especially for a first generation machine.

I will report some ROM and chip finds including new hopes for replacing the missing pointing device in a next blog post.
          ROM-less emulation progress        
Over the last two weeks I have implemented most of the high-level emulation framework that I alluded to in my last post here as well as a large number of tracing wrappers for the original ROM calls. In the next stage I will start replacing some of those wrappers with re-implementations, starting with some easy ones.

It turns out I was somewhat optimistic; so far I have wrapped over 450 distinct ROM entry points (the actual current number of wrappers is 513, but there are some error catchers and possible duplicates). Creating the wrappers and writing and debugging the framework took more effort than I expected, but it was worth it: every call to a ROM entry point described or implied by the Green Book or OS-9 documentation is now wrapped with a high-level emulation function that so far does nothing except call the original ROM routine and trace its input/output register values.

Surely there aren't that many application-callable API functions, I can hear you think? Well actually there are, for sufficiently loose definitions of "application-callable". You see, the Green Book specifies CD-RTOS as being OS-9 and every "trick" normally allowed under OS-9 is theoretically legal in a CD-i title. That includes bypassing the OS-supplied file managers and directly calling device drivers; there are many CD-i titles that do some of this (the driver interfaces are specified by the Green Book). In particular, all titles using the Balboa library do this.

I wanted an emulation framework that could handle this so my framework is built around the idea of replacing the OS-9 module internals but retaining their interfaces, including all the documented (and possibly some undocumented) data structures. One of the nice features of this approach is that native ROM code can be replaced by high-level emulation on a routine-by-routine basis.

How does it really work? As a start, I've enhanced the 68000 emulation to possibly invoke emulation modules whenever an emulated instruction generates one of the following processor exceptions: trap, illegal instruction, line-A, line-F.

The emulation modules can operate in two modes: either copy an existing ROM module and wrap its entry points, or generate an entirely new memory module. In both cases, the emulation module will emit line-A instructions at the appropriate points. The emitted modules go into a memory area appropriately called "emurom" that the OS-9 kernel scans for modules. Giving the emitted modules identical names but higher revision numbers than the ROM modules causes the OS-9 kernel to use the emitted modules.

This approach works for every module except the kernel itself, because it is entered by the boot code before the memory scan for modules is even performed. The kernel emulation module will actually patch the ROM kernel entry point so that it jumps to the emitted kernel module.

The emitted line-A instructions are recognized by the emulator disassembler; they are called "modcall" instructions (module call). Each such instruction corresponds to a single emulation function; entry points into the function (described below) are indicated by the word immediately following it in memory. For example, the ROM routine that handles the F$CRC system call now disassembles like this:

modcall kernel:CRC:0
jsr XXX.l
modcall kernel:CRC:$
rts

Here the XXX is the absolute address of the original ROM routine for this system call; the two modcall instructions trace the input and output registers of this handler. If the system call were purely emulated (no fallback to the original ROM routine) it would look like this:

modcall kernel:CRC:0
modcall kernel:CRC:$
rts

Both modcall instructions remain, although technically the latter is now unnecessary, but the jsr instruction has disappeared. Technically, the rts instruction could also be eliminated but it looks more comprehensible this way.

One could view the approach as adding a very powerful "OS-9 coprocessor" to the system.
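
In code, the hook in the 68000 core might look roughly like this; a toy sketch with invented names (OnLineAException, modcallTable), not the actual CD-i Emulator internals:

// Dispatch a line-A "modcall" opcode to its high-level emulation function.
#include <cstdint>
#include <map>

typedef int (*EmulationFunction)(int entryNumber);

static std::map<uint16_t, EmulationFunction> modcallTable;

// Called by the 68000 core when a line-A ($Axxx) opcode is executed;
// entryWord is the word immediately following the opcode in memory.
bool OnLineAException(uint16_t opcode, uint16_t entryWord)
{
    std::map<uint16_t, EmulationFunction>::const_iterator it =
        modcallTable.find(opcode);
    if (it == modcallTable.end())
        return false;           // not ours: raise the normal line-A exception
    it->second(entryWord);      // run the emulation function, entered at
                                // the given relative line number
    return true;                // handled; skip the emulated exception
}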

If an emulation function has to make inter-module calls, complications arise. High-level emulation context cannot cross module boundaries, because the called module may be native (and in many cases even intra-module calls can raise this issue). For this reason, emulation functions need additional entry points where the emulation can resume after making such a call. The machine language would look like this, e.g. for the F$Open system call:

modcall kernel:Open:0
modcall kernel:Open:25
modcall kernel:Open:83
modcall kernel:Open:145
modcall kernel:Open:$
rts

The numbers following the colon are relative line numbers in the emulation function. When the emulation function needs to make a native call, it pushes the address of one such modcall instruction on the native stack, sets the PC register to the address it wants to call and resumes instruction emulation. When the native routine returns, it will return to the modcall instruction which will re-enter the emulation function at the appropriate point.

One would expect that emulation functions making native calls need to be coded very strangely: a big switch statement on the entry code (relative line number), followed by the appropriate code. However, a little feature of the C and C++ languages allows the switch statement to be mostly hidden. The languages allow the case labels of a switch statement to be nested arbitrarily deep into the statements inside the switch.

The entire contents of emulation functions are encapsulated inside a switch statement on the entry number (hidden by macros):

switch (entrynumber)
{
case 0:
...
}

On the initial call, zero is passed for entrynumber so the function body starts executing normally. Where a native call needs to be made, the processor registers are set up (more on this below) and a macro is invoked:

MOD_CALL(address);

This macro expands to something like this:

MOD_PARAMS.SetJumpAddress(address);
MOD_PARAMS.SetReturnLine(__LINE__);
return eMOD_CALL;
case __LINE__:

Because this is a macro expansion, both invocations of the __LINE__ macro will expand to the line number of the MOD_CALL macro invocation.

What this does is to save the target address and return line inside MOD_PARAMS and then return from the emulation function with value eMOD_CALL. This value causes the wrapper code to push the address of the appropriate modcall instruction and jump to the specified address. When that modcall instruction executes after the native call returns, it passes the return line to the emulation function as the entry number which will dutifully switch on it and control will resume directly after the MOD_CALL macro.

In reality, the code uses not __LINE__ but __LINE__ - MOD_BASELINE which will use relative line numbers instead of absolute ones; MOD_BASELINE is a constant defined as the value of __LINE__ at the start of the emulation function.

The procedure described above has one serious drawback: emulation functions cannot have "active" local variables at the point where native calls are made (the compiler will generate errors complaining that variable initialisations are being skipped). However, the emulated processor registers are available as temporaries (properly saved and restored on entry and exit of the emulation function if necessary) which should be good enough. Macros are defined to make accessing these registers easy.

When native calls need to be made, the registers must be set up properly. This would lead to constant "register juggling" before and after each call, which is error-prone and tedious. To avoid it, it is possible to use two new sets of registers: the parameter set and the results set. Before a call, the parameter registers must be set up properly; the call will then use these register values as inputs and the outputs will be stored in the results registers (register juggling will be done by the wrapper code). The parameter registers are initially set to the values of the emulated processor registers and also set from the results registers after each call.
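
To make the mechanism concrete, here is a self-contained toy version. The MOD_CALL macro, MOD_BASELINE and the eMOD_CALL return value are modeled on the description above; the stand-in globals and the kernel_Open body are invented for illustration:

#include <cstdio>

enum EModResult { eMOD_CALL, eMOD_DONE };

static unsigned long jumpAddress;  // stands in for MOD_PARAMS.SetJumpAddress
static int returnLine;             // stands in for MOD_PARAMS.SetReturnLine

#define MOD_CALL(address)                       \
    jumpAddress = (address);                    \
    returnLine = __LINE__ - MOD_BASELINE;       \
    return eMOD_CALL;                           \
    case __LINE__ - MOD_BASELINE:

EModResult kernel_Open(int entrynumber)
{
    enum { MOD_BASELINE = __LINE__ };  // anchor for relative line numbers
    switch (entrynumber)
    {
    case 0:
        printf("set up parameter registers for the native call\n");
        MOD_CALL(0x00180000UL);  // emulation resumes here after the call
        printf("back from the native call, inspect result registers\n");
    }
    return eMOD_DONE;
}

int main()
{
    int entry = 0;
    while (kernel_Open(entry) == eMOD_CALL)
    {
        printf("wrapper: native call to $%lX\n", jumpAddress);
        entry = returnLine;  // re-enter at the saved relative line number
    }
    return 0;
}

Running this prints the setup message, the simulated native call, and then the resumption message: exactly the control flow of the real wrapper code.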

The following OS-9 modules are currently wrapped:

kernel nrf nvdrv cdfm cddrv ucm vddrv ptdrv kbdrv pipe scf scdrv

The *drv modules are device drivers; their names must be set to match the ones used in the current system ROM in order to properly override those. The *.brd files in the sys directory have been extended to include this information like this:

** Driver names for ROM emulation.
set cddrv.name=cdapdriv
set vddrv.name=video
set ptdrv.name=pointer
set kbdrv.name=kb1driv

The kernel emulation module avoids knowledge of system call handler addresses inside the kernel by trapping the first "system call" so that it can hook all the handler addresses in the system and user mode dispatch tables to their proper emulation stubs. This first system call is normally the I$Open call for the console device.

File manager and driver emulation routines hook all the entry points by simply emitting a new entry point table and putting the offset to it in the module header. The offsets in the new table point to the entry point stubs (the addresses of the original ROM routines are obtained from the original entry point table).

The above works fine for most modules, but there was a problem with the video driver because it is larger than 64KB (the offsets in the entry point table are 16-bit values relative to the start of the module). Luckily there is a text area near the beginning of the original module (it is actually just after the original entry point table) that can be used for a "jump table", so that all entry point offsets fit into 16 bits. After this it should have worked, but it didn't, because it turns out that UCM has a bug that requires the entry point table to *also* be in the first 64KB of the module (it ignores the upper 16 bits of the 32-bit offset to this table in the module header). This was fixed by simply reusing the original entry point table in this case.
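
The trampoline trick itself is straightforward. A sketch of emitting one 68000 trampoline might look like this (the emitTrampoline name and layout are mine; only the 16-bit offset constraint comes from the module format):

// Emit a 68000 "jmp <target>.l" instruction (opcode $4EF9 followed by a
// 32-bit absolute address) at 'slot', which must lie within the first
// 64KB of the module; the returned 16-bit offset goes into the entry
// point table.
#include <cstdint>

uint16_t emitTrampoline(uint8_t *module, uint32_t slot, uint32_t target)
{
    module[slot + 0] = 0x4E;    // jmp (xxx).l
    module[slot + 1] = 0xF9;
    module[slot + 2] = (uint8_t)(target >> 24);
    module[slot + 3] = (uint8_t)(target >> 16);
    module[slot + 4] = (uint8_t)(target >> 8);
    module[slot + 5] = (uint8_t)(target);
    return (uint16_t)slot;      // fits in 16 bits by construction
}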

One further complication arose because UCM requires the initialisation routines of drivers to also store the absolute addresses of their entry points in UCM variables. These addresses were "hooked" by adding code to the initialisation emulation routine that changes these addresses to point to the appropriate modcall instructions.

All file managers and drivers contain further dispatching for the SetStat and GetStat routines, based on the contents of one or two registers. Different values in these registers will invoke entirely separate functions with different register conventions; they really must be redirected to different emulation functions. This is achieved by lifting the dispatching to the emulation wrapper code (it is all table-driven).
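
A sketch of that table-driven dispatch in the wrapper code (types and names invented): the function code in a register selects a specific emulation function, so that e.g. a GetStat with SS_PT/PT_Coord goes straight to the GetPointer emulation:

// Map a GetStat/SetStat function code to its own emulation function.
#include <cstdint>

typedef int (*EmulationFunction)(int entryNumber);

struct StatDispatchEntry
{
    uint16_t code;              // register value, e.g. d1.w = SS_PT
    EmulationFunction target;   // e.g. the ucm:GetPointer emulation
};

EmulationFunction dispatchStat(const StatDispatchEntry *table, int count,
                               uint16_t code, EmulationFunction fallback)
{
    for (int i = 0; i < count; i++)
        if (table[i].code == code)
            return table[i].target;
    return fallback;            // unknown codes go to the generic handler
}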

Most of the above has been implemented, and CD-i emulator now traces all calls to ROM routines (when emurom is being used). A simple call to get pointing device coordinates would previously trace as follows (when trap tracing was turned on with the "et trp" command):

@00DF87E4(cdi_app) TRAP[5812] #0 I$GetStt <= d0.w=7 d1.w=SS_PT d2.w=PT_Coord
@00DF87E8(cdi_app) TRAP[5812] #0 I$GetStt => d0.w=$8000 d1.l=$1EF00FD

Here the input value d0.w=7 is the path number of the pointing device; the resulting mouse coordinates are in d1.l and correspond to (253,495).

When modcall tracing is turned on, this "simple" call will trace as follows:

@00DF87E4(cdi_app) TRAP[5812] #0 I$GetStt <= d0.w=7 d1.w=SS_PT d2.w=PT_Coord
@00F86EE0(kernel) MODCALL[16383] kernel:GetStt:0 <= d0.w=7 d1.w=$59 [Sys]
@00F86D10(kernel) MODCALL[16384] kernel:CCtl:0 <= d0.l=2 [NoTrap]
@00F86D1A(kernel) MODCALL[16384] kernel:CCtl:$ =>
@00F8A460(ucm) MODCALL[16385] ucm:GetPointer:0 <= u_d0.w=7 u_d2.w=0
@00FA10A4(pointer) MODCALL[16386] pointer:PtCoord:0 <= d0.w=7
@00FA10AE(pointer) MODCALL[16386] pointer:PtCoord:$ => d0.w=$8000 d1.l=$1EF00FD
@00F8A46A(ucm) MODCALL[16385] ucm:GetPointer:$ =>
@00F86D10(kernel) MODCALL[16387] kernel:CCtl:0 <= d0.l=5 [NoTrap]
@00F86D1A(kernel) MODCALL[16387] kernel:CCtl:$ =>
@00F86EEA(kernel) MODCALL[16383] kernel:GetStt:$ =>
@00DF87E8(cdi_app) TRAP[5812] #0 I$GetStt => d0.w=$8000 d1.l=$1EF00FD

You can see that the kernel dispatches this system call to kernel:GetStt, the handler for the I$GetStt system call. It starts by doing some cache control and then calls the GetStat entry point of the ucm module, which dispatches it to its GetPointer routine. This routine in turn calls the GetStat routine of the pointer driver, which dispatches it to its PtCoord routine. This final routine performs the actual work and returns the results, which are then ultimately returned by the system call, after another bit of cache control.

The calls to ucm:GetStat and pointer:GetStat are no longer visible in the above trace as the emulation wrapper code directly dispatches them to ucm:GetPointer and pointer:PtCoord, respectively; it doesn't even trace the dispatching because this would result in another four lines of tracing output.

As a sidenote, all of the meticulous cache and address space control done by the kernel is really wasted, as CD-i systems do not need it. But the calls are still being made, which makes the kernel needlessly slow; that is one major reason why calling device drivers directly is often done. Newer versions of OS-9 eliminate these calls by using different kernel flavors for different processors and hardware configurations.

The massive amount of tracing needs to be curtailed somewhat before further work can productively be done; this is what I will start with next.

I have already generated fully documented stub functions for the OS-9 kernel calls from the OS-9 technical documentation; I will also need to generate them for all file manager and driver calls, based on the digital Green Book.

It is perhaps noteworthy that some kernel calls are not described in any of the OS-9 version 2.4 documentation that I was able to find, but they *are* described in the online OS-9/68000 version 3.0 documentation.

Some calls made by the native ROMs remain undocumented, but those mostly seem to be CD-i system-control calls (for example, one of them sets the front display text). Of the OS-9 kernel calls, only the following ones are currently undocumented:

F$AllRAM
F$FModul
F$POSK

Their existence was inferred by the appropriate constants existing in the compiler library files, but I have not seen any calls to them (yet).
          CD-i Emulator Cookbook        
Just a quick note that work on CD-i Emulator hasn't stopped.

I have some wild ideas about ROM-less emulation; this would basically mean re-implementing the CD-RTOS operating system. Somewhat daunting: it contains over 350 separate explicit APIs and callable entry points, and many system data structures would need to be closely emulated. But it can be done; CD-ice proved it (although it took a number of shortcuts that I want to avoid).

I'm not going to tackle that by myself; my current thinking is to make a start by implementing a high-level emulation framework, tracing stubs for all the calls (luckily these can mostly be generated automatically from the digital Green Book and OS-9 manuals) and some scaffolding and samples.

One of the pieces of scaffolding would be a really simple CD-i player shell; one that just shows a big "Play CD-i" button and then starts the CD-i title :-)

For samples I'm thinking about a few easy system calls like F$CRC, F$SetCRC, F$SetSys, F$CmpNam, F$PrsNam, F$ID, F$SUser, F$Icpt, F$SigMask, F$STrap, F$Trans, F$Move, F$SSvc (I may not get through the entire list) and a new NVRAM File Manager (NRF).

It would be nice to do a minimal UCM with Video and Pointer driver so that the simple CD-i player shell would run, but that might be too much. We'll see.

However, it's the new NRF that would be the most immediately interesting for CD-i Emulator users. It would intercept NVRAM access at the file level and redirect it to the PC file system (probably to files in the nvr directory). This would allow easy sharing of CD-i NVRAM files (e.g. game saves) across player types or between CD-i Emulator users.

To allow all of the above and clean up some dirty tricks that were needed for input playback and handling Quizard, I've done some internal restructuring of CD-i Emulator. In particular, I introduced a new "handler" class beneath the existing "device" and "memory" classes (which are now no longer derived from each other but from a common "component" base class). This restructuring isn't finished yet, but it will allow the input and Quizard stuff to become handlers instead of devices (the latter is improper because they shouldn't be visible on the CD-i system bus).
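
In outline, and with the caveat that the exact inheritance is my reading of the above, the restructured hierarchy would look something like this:

// Class names from the text; relationships and members are assumptions.
class component { /* common base: naming, reset, save/restore, ... */ };

class device : public component { /* visible on the CD-i system bus */ };
class memory : public component { /* mapped memory regions */ };

class handler : public component { /* takes part in emulation without
                                      appearing on the system bus */ };

class module : public handler { /* high-level emulation of OS-9 and
                                   CD-RTOS ROM modules */ };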

The new "module" class (a subclass of handler) will be used to add high-level emulation of OS-9 and CD-RTOS rom modules. I want to preserve the interfaces between the modules and the public data structures as much as possible, because it will allow a gradual transition from "real" to "emulated" modules.

To prepare for all of the above I had to do some fairly heavy design, which caused me to properly write down some of the design information and tradeoffs for the first time. This will be invaluable information for co-developers (if they ever materialize), hence the title "CD-i Emulator Cookbook". Well, at present it's more like a leaflet but I hope to expand it over time and also add some history.

Pieces of the cookbook will be added to the CD-i Emulator website if I feel they're ready.

I've also been giving some thought on a collaboration model for the ROM-less emulation. If there is interest I could do a partial source release that would allow other developers to work on the ROM-less emulation. This release would *not* contain any non-video chipset emulation but it would contain "generic" CD-i audio and video decoding. You would still need to use (part of) a system ROM (in particular the OS-9 kernel and file managers) until enough of the emulation is finished.

I'm still considering all this, but I wanted to get the word out to see if there is interest and to show that I haven't abandoned the project.

Potential co-developers should start boning up on OS-9 and CD-RTOS. All of the technical OS-9 documentation is online at ICDIA and links to the digital Green Book can also be found.
          February 2012 Daring Cooks' Challenge: Flipping Fried Patties!!!        
Hi, it's Lisa and Audax, and we are hosting this month's Daring Cooks' challenge! We have chosen a basic kitchen recipe and a basic cooking technique which can be adapted to suit any ingredients you have to hand and are beloved by children and adults alike … of course we are talking about patties.
Technically, patties are flattened discs of ingredients held together by (added) binders (usually eggs, flour or breadcrumbs), usually coated in breadcrumbs (or flour), then fried (and sometimes baked). Burgers, rissoles, croquettes, fritters, and rösti are types of patties as well.

Irish chef Patrick "Patty" Seedhouse is said to have come up with the original concept and term as we know it today with his first production of burgers utilizing steamed meat patties - the patties were "packed and patted down" (and called "pattys" for short) in order to shape a flattened disc that would enflame with juices once steamed.

The binding of the ingredients in patties follows a couple of simple recipes (there is some overlap in the categories below):
Patties – patties are ingredients bound together and shaped as a disc.
Rissoles and croquettes – use egg with breadcrumbs as the binder; typical usage for 500 grams (1 lb) of filling ingredients is 1 egg with ½ cup of breadcrumbs (sometimes flour, cooked grains, nuts and bran can be used instead of the breadcrumbs). Some meat patties use no added binders at all; they rely on the protein strands within the meat to bind the patty together. Vegetarian and vegan patties may use mashed vegetables, mashed beans, grains, nuts and seeds to bind the patty. Generally croquettes are crumbed (breaded) patties which are shallow- or deep-fried. Rissoles are not usually crumbed (but can be) and are pan- or shallow-fried. Most rissoles and croquettes can be baked. (Examples are all-meat patties, hamburgers, meat rissoles, meatloaves, meatballs, tuna and rice patties, salmon and potato rissoles, and most vegetable patties.)
Wet Fritters – use flour, eggs and milk as the binder; typical usage for 500 grams (1 lb) of filling ingredients is 2 cups flour and 1 egg with 1 cup of milk. These are usually deep-fried and sometimes pan-fried. (Examples: deep-fried apple fritters, potato fritters, some vegetable fritters, hushpuppies.)
Dry Fritters – use eggs and (some) flour as the binder; typical usage for 500 grams (1 lb) of filling ingredients is 1 to 2 eggs and (usually) some 2 to 8 tablespoons of flour (but sometimes no flour). These are pan- or shallow-fried. (Examples: most vegetable patties like zucchini fritters, Thai fish cakes, crab cakes, NZ whitebait fritters.)
Röstis – use eggs (sometimes with a little flour) as the binder for grated potato, carrot and other root vegetables; typical usage for 500 grams (1 lb) of filling ingredients is one egg yolk (potato rösti).

Sautéing, stir-frying, pan-frying, shallow-frying, and deep-frying use different amounts of fat to cook the food. Sautéing uses the least oil (a few teaspoons) while deep-frying uses the most (many, many cups). The oil lubricates the food being fried (and sometimes adds flavour) so it will not stick to the pan, and helps transfer heat to the food being cooked.

In particular, as a way of cooking patties, pan- and shallow-frying rely on oil of the correct temperature to seal the surface (retaining moisture) and to heat the interior ingredients (binding them together), so cooking the patty. Unlike deep-frying, the exposed topside of the patty allows some moisture loss while cooking, and the contact between the patty and the pan bottom creates greater browning on the contact surface: the crust of the patty is browned while the interior is cooked. Because the food is only cooked on one side at a time while being pan- or shallow-fried, it must be flipped at least once to cook the patty through.

So this month's challenge is to pan- or shallow-fry a patty, giving us the title for this challenge: "flipping fried patties".

This challenge will help you understand how to form a patty, what binders to use, and how to fry it so that it is cooked to picture-perfect perfection.

Recipe Source:  Audax adapted a number of popular recipes to come up with the challenge patty recipes and Lisa has chosen to share two recipes – California Turkey Burger adapted from Cooking Light Magazine, and French Onion Salisbury Steak adapted from Cuisine at Home magazine.

Blog-checking lines:  The Daring Cooks’ February 2012 challenge was hosted by Audax & Lis and they chose to present Patties for their ease of construction, ingredients and deliciousness!  We were given several recipes, and learned the different types of binders and cooking methods to produce our own tasty patties!

Posting Date:  February 14th, 2012

Download the printable .pdf file HERE



Notes:
     
  • Binders
  • Eggs – eggs are found in most patty recipes, where they act as a binder; use cold eggs and lightly beat them before using. If you cannot use eggs, try this tip: "1/4 cup of silken tofu, blended, or a commercial egg replacer powder mixed with warm water."
  • Flour – normal plain (all-purpose) flour is used in most fritter recipes; it can be replaced with rice, corn or potato flours (in smaller quantities) in some recipes. If you want some rise in your patties, use self-raising flour or add some baking powder to the flour.
  • Breadcrumb Preparation – breadcrumbs are a common ingredient in patties, burgers and fritters; they act as a binding agent, ensuring the patty keeps its shape during the cooking process.
    • Fresh breadcrumbs – these crumbs are made at home with stale bread: simply remove the crusts from one- or two-day-old bread, break the bread into pieces, place the pieces in a blender or food processor, then blend or process until fine. Store any excess in a plastic bag in the freezer. 1 cup of fresh crumbs = 3 slices of bread.
    • Packaged breadcrumbs – often called dry breadcrumbs, these are used to make a crisp coating on burgers, patties and fritters, and they are easily found in the supermarket. You can also make them at home: place slices of one- or two-day-old bread on baking trays, bake in the oven on the lowest setting until the slices are crisp and pale brown, cool the bread, break it into pieces, then blend or process until fine. 1 cup fine dry breadcrumbs = 4 slices of bread.
  • Alternate binders – bran (oat, wheat, rice, barley, etc.) can be used instead of breadcrumbs in most recipes. Tofu (silken) can replace the egg. Mashed potato (or sweet potato, carrots, most root vegetables) and/or mashed beans can also help bind most patties. Of course, chickpea flour and most other flours can be used to help bind patties. Seeds, nuts and grains can help bind a patty, especially once the patty has cooled after cooking. These binders are used in vegan recipes.
  • Moisteners – mayonnaise and other sauces, pesto and mustard are used in some meat patty recipes mainly for moisture and flavour, but they can act as binders as well. For vegetable patties you can use chopped frozen spinach, shredded carrots, shredded zucchini, shredded apple and cooked grains to add extra moisture. Sour cream and other milk products are also used to increase the tenderness of patties.

     
  • Patty Perfection
  • When making meat patties, the higher the fat content of the meat, the more the patties shrink during cooking; this is especially true for ground (minced) red meat. Make patties larger than the bun they are to be served on, to allow for shrinkage.
  • For hamburgers, keep the fat content to about 20-30% (don't use lean meat); this ensures juicy patties when cooked. Also use coarse, freshly ground meat (if possible) to make patties. If the mixture is ground too fine, large patties will break apart, since the protein strands are too short, are covered in fat, and can only bind to nearby ingredients, so when the large patty is cooked it will fall apart or be too dense. Compare this behaviour with small amounts of finely ground lean meat (almost a paste), where the protein can adhere to itself (since the protein chains are short, not covered in fat, and all the ingredients are nearby), hence forming a small, stable patty (lamb kofta, Asian chicken balls, prawn balls).
  • Patty mixtures should be kept as cold as possible when preparing them, and kept cold until you cook them; the cold helps bind the ingredients together.
  • Don't over-mix the ingredients, or the resultant mixture will be heavy and dense.
  • For meat patties, chop, mince or grate the vegetable ingredients fairly finely; if they are too coarse, the patties will break apart.
  • Patties made mostly of meat (good quality hamburgers and rissoles) should be seasoned just before the cooking process; if salted too early, liquid can be drawn out of the patty.
  • Make all the patties the same size so they will cook at the same rate. To get even-sized patties, use measuring cups or spoons to measure out your mixture.
  • For patties, use your hands to combine the ingredients with the binders; mix gently until the mixture comes cleanly from the sides of the mixing bowl. Test that the final mixture forms a good patty (take a small amount in your palm and form it into a ball; it should hold together) before making the whole batch. Add extra liquid or dry binder as needed. Cook the test patty to check for seasoning, add extra if needed, then cook the rest of the batch.
  • Usually patties should be rested (about an hour) before cooking; they "firm up" during this time, which is a good technique to use if your patty is soft. Always wrap patties; they can dry out if left in the fridge uncovered.
  • Dampen your hands when shaping patties so the mixture won't stick to your fingers.
  • If making vegetable patties, it is best to squeeze the grated/chopped/minced vegetables to remove any excess liquid; this is most important for these types of patties.
  • When making fritters, shred your vegetables: shredding makes long strands that give a strong lattice for the patties. A food processor or a box grater is great to use here.
  • For veggie patties, make sure your ingredients are free of extra water. Drain and dry your beans or other ingredients thoroughly before mashing. You can even pat them gently dry with a kitchen cloth or paper towel.
  • Vegetable patties lack the fat of meat patties, so oil the grill when BBQing them so the patty will not stick.
  • Oil all-meat burgers rather than oiling the barbecue or grill pan – this ensures the burgers don't stick to the grill, allowing them to sear well. If they sear well in the first few minutes of cooking they'll be golden brown and juicy. To make it easy, brush the burgers with a brush dipped in oil or, easier still, use a spray can of oil.
  • If you only have very lean ground beef, try this tip from the Chicago Tribune newspaper: "To each 1 lb (½ kg) of ground beef add 2 tablespoons of cold water (with added salt and pepper) and 2 crushed ice cubes, form patties." It really does work.
  • A panade, or mixture of bread crumbs and milk, will add moisture and tenderness to meat patties when the burgers are cooked well-done.
  • For vegetable patties it is best to focus on one main ingredient, then add some interesting flavour notes to that major taste (examples: carrot and caraway patties; beetroot, feta and chickpea fritters); this gives a much bolder flavour profile than a patty of mashed "mixed" vegetables, which can be bland.
  • Most vegetable and meat/vegetable patties just need a light coating of seasoned breadcrumbs. Lightly pat breadcrumbs onto the surface of the patty; there is enough moisture and binder on the surface to hold the breadcrumbs to the patty while it is cooking. You can use wheatgerm, bran flakes, crushed breakfast cereals, nuts and seeds to coat the patty.
  • Use fine packet breadcrumbs as the coating if you want a fine, smooth crust on your patties; use coarser fresh breadcrumbs if you want a rougher, crisper crust.
  • Flip patties once and only once; over-flipping the patty results in uneven cooking of the interior and allows the juices to escape.
  • Don't press the patties while they are cooking, or you'll squeeze out all of the succulent juices.
  • Rest patties a while before consuming.

     
  • Shaping the patty
  • Shaping – shape the patty by pressing a ball of mixture with your clean hands; it will form a disc shape which will crack and break up around the edges. What you want to do is press down in the middle and in from the sides, turning the patty around in your hand until it is even and uniform. It should be a solid disc that is firm. Handle the mixture gently, use a light touch and don't make the patties too compacted. Rather than a dense burger, which is difficult to cook well, aim for a loosely formed patty that holds together but is not too compressed.
  • Depressing the centre – when patties cook, they shrink (especially red meat burgers). As they shrink, the edges tend to break apart, causing deep cracks to form in the patty. To combat this you want the burger patty to be thinner in the middle than it is around the edges. Slightly depress the centre of the patty to push a little extra mixture towards the edges. This will give you an even patty once it is cooked.

     
  • Shallow- and pan-frying
  • Preheat the pan or BBQ.
  • Generally, when shallow-frying patties, use enough oil that it comes halfway up the sides of the food. This is best for most meat and vegetable patties, and where the ingredients in the patty are uncooked.
  • Generally, when pan-frying, use enough oil to cover the surface of the pan. This is best for most vegetable patties where all the ingredients are precooked (or cook very quickly), and for all-meat rissoles and hamburgers.
  • Most oils are suitable for shallow- and pan-frying, but butter is not; it tends to burn. Butter can, however, be used in combination with oil. Low-fat spreads cannot be used to shallow-fry, as they contain a high proportion of water. The smoke point is when the oil starts to break down into bitter fatty acids and produces a bluish smoke. Rice bran oil is a great choice, since it is almost tasteless and has a very high smoke point of 490°F/254°C; canola (smoke point 400°F/204°C) is also a great choice. Butter has a smoke point of 250–300°F/121–149°C; extra light olive oil 468°F/242°C; extra virgin olive oil 375°F/191°C; ghee (clarified butter) 485°F/252°C.
  • Do not overload the frying pan: this traps steam near the cooking food, which might lead to the patties being steamed instead of fried. If you place too many patties at once into the preheated pan, the heat drops and the patties will then release juices and begin to stew. Leave some space between the patties when you place them in the pan.
  • For most patties, preheat the oil or fat until the oil seems to shimmer or a faint haze rises from it, but take care not to let it get so hot it smokes. If the oil is too cool when the patties are added, it will be absorbed by the food, making the patty soggy. If the oil is too hot, the crumb coating will burn before the interior ingredients are cooked and/or warmed through. For vegetable and meat/vegetable patties, start off cooking in a medium-hot skillet and then reduce the heat to medium. For all-meat patties, start off cooking in a very hot skillet and then reduce the heat to hot; as celebrity chef Bobby Flay says, "the perfect [meat] burger should be a contrast in textures, which means a tender, juicy interior and a crusty, slightly charred exterior. This is achieved by cooking the meat [patty] directly over very hot heat, rather than the indirect method preferred for slow barbecues". All patties should sizzle when they are placed onto the preheated pan.
  • Cast iron pans are best for frying patties.
  • When the raw patty hits the hot cooking surface it will stick, and it will stay stuck until the patty crust forms, creating a non-stick surface on the patty; at that point you can lift the patty easily without sticking. So wait until the patties release themselves naturally from the frying pan surface (test with a gentle shaking of the pan or a light finger-twist of the patty) - maybe a minute or two for meat patties, maybe 3-6 minutes for a vegetable patty. If you try to flip it too early the burger will fall apart. The secret is to wait for the patty to naturally release itself from the pan surface, then flip it over once.
  • Veggie burgers will firm up significantly as they cool.
  • Most vegetable patties can be baked in the oven.
  • Check the temperature of the oil by placing a few breadcrumbs into the pan; they should take 30 seconds to brown.
  • If you need to soak up excess oil, place the patties on a rack to drain. Do not place them onto paper towels, since steam will be trapped, which can make the patty soggy; if you need to, just press off the excess oil with paper towels and then place the patties onto a rack.



Mandatory Items: Make a batch of pan- or shallow-fried (or baked) patties.

Variations allowed:  Any variation on a patty is allowed. You can use the recipes provided or make your own recipe.

Preparation time:
Patties: Preparation time less than 60 minutes. Cooking time less than 20 minutes.

Equipment required:
Large mixing bowl
Large stirring spoon
Measuring cup
Frying pan

Basic Canned Fish and Rice Patties


Servings: makes about ten ½ cup  patties
Recipe can be doubled
adapted from http://www.taste.com.au/recipes/17181/tuna+rissoles

This is one of my favourite patty recipes; I make it once a week during the holidays. It is most important that you really mix and mash the patty ingredients well, since the slightly mashed rice helps bind the patty together.

Ingredients:
1 can (415 gm/15 oz) pink salmon or tuna or sardines, (not packed in oil) drained well
1 can (340 gm/13 oz) corn kernels, drained well
1 bunch spinach, cooked, chopped & squeezed dry or 60 gm/2 oz thawed frozen spinach squeezed dry
2 cups (300 gm/7 oz) cooked white rice (made from 2/3 cups of uncooked rice)
1 large egg, lightly beaten
about 3 tablespoons (20 gm/2/3 oz) fine packet breadcrumbs for binding
3 tablespoons (45 ml) oil, for frying
2 spring (green) onions, finely chopped
1 tablespoon (15 ml) tomato paste or 1 tablespoon (15 ml) hot chilli sauce
1 tablespoon (15 ml) oyster sauce
2 tablespoons (30 ml) sweet chilli sauce
Salt and pepper to taste
½ cup (60 gm/2 oz) seasoned fine packet bread crumbs to cover patties

Directions:
1) Place all of the ingredients into a large bowl.
2) Mix and mash the ingredients with much force, using your hands or a strong spoon (while slowly adding tablespoons of breadcrumbs to the patty mixture), until the mixture starts to cling to itself, about 4 minutes; the longer you mix and mash, the more compacted the final patty. Day-old cold rice works best (it only needs a tablespoon of breadcrumbs or less), but if the rice is hot or warm you will need more breadcrumbs to bind the mixture. Test the mixture by forming a small ball; it should hold together. Cook the test ball and adjust the seasoning (salt and pepper) of the mixture to taste.
3) Form patties using a ½ cup measuring cup.
4) Cover in seasoned breadcrumbs.
5) Use immediately, or refrigerate covered for a few hours.
6) Preheat a frying pan (cast iron is best) to medium-hot, add 1½ tablespoons of oil and heat until the oil shimmers, then place the patties, well spaced out, in the pan and lower the heat to medium.
7) Pan-fry for about 3 minutes each side for a thin, lightly browned crust, or about 10 minutes for a darker, thicker, crisper crust. Wait until the patties can be released from the pan with a shake of the pan or a light turn of the patty with your fingers before flipping them over to cook the other side; add the remaining 1½ tablespoons of oil when you flip the patties. Flip only once. You can fry the sides of the patty if you want brown sides on your patty.

Pictorial Guide
Some of the ingredients

Starting to mix the patty mixture           

About ready to be tested

The test ball to check if the mixture will hold together

Form patties using a ½ cup measuring cup

Crumb (bread) the patties                   

Cover and refrigerate


Preheat the frying pan, add oil, wait until the oil shimmers, then add the patties, well spaced out, to the pan

Wait until the patties can be released by a light shaking of the pan or by finger-turning the patty, then flip the patties over and add some extra oil (these were fried for 10 minutes)

Enjoy picture perfect patties

This patty was pan-fried in my cast iron frying pan; notice the shiny, very crisp crust compared to the patty above

Zucchini, prosciutto & cheese fritters


Servings: makes about 8-10 two inch (five cm) fritters
Recipe can be doubled
adapted from http://smittenkitchen.com/2011/08/zucchini-fritters/

This makes a great light lunch or a lovely side dish for dinner. 

Ingredients:
500 gm (1 lb) zucchini (two medium)
1 teaspoon (5 ml) (7 gm) salt
½ cup (120 ml) (60 g/2 oz) grated cheese, a strong bitty cheese is best
5 slices (30 gm/1 oz) prosciutto, cut into small pieces
½ cup (120 ml) (70 gm/2½ oz) all-purpose (plain) flour plus ½ teaspoon baking powder, sifted together
2 large eggs, lightly beaten
2 spring onions, finely chopped
1 tablespoon (15 ml) chilli paste
1 teaspoon (5 ml) (3 gm) black pepper, freshly cracked
2 tablespoons (30 ml) oil, for frying

Directions:
     
  • Grate the zucchini with a box grater or food processor. Place into a large bowl, add the salt, and wait 10 minutes.
  • While waiting for the zucchini, pan-fry the prosciutto pieces until cooked. Remove from the pan and place the prosciutto onto a rack; this will crisp it up as it cools. Paper towels tend to make prosciutto soggy if it is left on them.
  • When the zucchini is ready, wrap it in a cloth and squeeze it dry with as much force as you can; you will get a lot of liquid, over ½ cup. Discard the liquid; it will be too salty to use.
  • Return the dried zucchini to the bowl and add the prosciutto, cheese, sifted flour and baking powder, chilli paste, pepper, a little salt and the lightly beaten eggs.
  • Mix until combined. If the batter is too thick you can add water or milk or another egg; if it is too wet, add some more flour. It should be thick and should not flow when placed in the frying pan.
  • Preheat a frying pan (cast iron is best) until medium-hot, add 1/3 of the oil and wait until it shimmers.
  • Place dollops of batter (about 2 tablespoons each) into the frying pan, widely spaced out, and with the back of a spoon smooth out each dollop to about 2 inches (5 cm) wide; do not make the fritters too thick. You should get three or four fritters in an average-sized frying pan. Lower the heat to medium.
  • Fry for 3-4 minutes on the first side, flip, then fry the other side for about 2-3 minutes until golden brown. Repeat for the remaining batter, adding extra oil as needed.
  • Place the cooked fritters on a baking dish in a moderate oven for 10 minutes if you want extra crispy fritters.


Pictures of process – fresh zucchini, grated zucchini, liquid released from salted and squeezed dry zucchini, ingredients for the fritters, fritter batter and frying the fritters.

Cooked fritters

California Turkey Burger


Servings: makes about 10 burgers
Recipe can be doubled
adapted from Cooking Light Magazine September 2005:
http://www.myrecipes.com/recipe/california-burgers-10000001097016/

Sauce:
½ cup (120 ml) ketchup
1 tablespoon (15 ml) Dijon mustard
1 tablespoon (15 ml) fat-free mayonnaise

Patties:
½ cup (120 ml) (60 gm/2 oz) finely chopped shallots
¼ cup (60 ml) (30 gm/1 oz) dry breadcrumbs
1 teaspoon (5 ml) (6 gm) salt
1 teaspoon (5 ml) Worcestershire sauce
¼ teaspoon (¾ gm) freshly ground black pepper
3 garlic cloves, minced
1¼ lbs (600 gm) ground turkey
1¼ lbs (600 gm) ground turkey breast
Cooking spray

Remaining ingredients:
10 (2-ounce/60 gm) hamburger buns
10 red leaf lettuce leaves
20 bread-and-butter pickles
10 (1/4-inch thick/5 mm thick) slices red onion, separated into rings
2 peeled avocados, each cut into 10 slices
3 cups (750 ml) (60 gm/2 oz) alfalfa sprouts

Directions:
1. Prepare the grill to medium-high heat.
2. To prepare sauce, combine first 3 ingredients; set aside.
3. To prepare patties, combine shallots and the next 7 ingredients (through turkey breast), mixing well. Divide mixture into 10 equal portions, shaping each into a 1/2-inch-thick (1¼ cm thick) patty. Place patties on grill rack coated with cooking spray; grill 4 minutes on each side or until done.
4. Spread 1 tablespoon sauce on top half of each bun. Layer bottom half of each bun with 1 lettuce leaf, 1 patty, 2 pickles, 1 onion slice, 2 avocado slices, and about 1/3 cup of sprouts. Cover with top halves of buns.                                                                                                         


Yield: 10 servings (serving size: 1 burger) – Nutritional Information: CALORIES 384 (29% from fat); FAT 12.4g (sat 2.6g, mono 5.1g, poly 2.8g); PROTEIN 31.4g; CHOLESTEROL 68mg; CALCIUM 94mg; SODIUM 828mg; FIBER 3.9g; IRON 4mg; CARBOHYDRATE 37.5g
Lisa’s Notes:
Nutritional information provided above is correct for the recipe as written.  When I make these burgers, the only ingredients I change are using regular mayo, and dill pickles.  My red lettuce of choice is radicchio.  I’ve both grilled and pan fried these burgers and both are delicious.  If you decide to pan fry, you’ll need a little extra fat in the pan – so use about 2 tsp. of extra virgin olive oil, or canola oil before laying your patties on the pan.  Cook for approximately 5 minutes on each side, or until done.  Do not overcook as the patties will dry out and not be as juicy and tasty! :)

French Onion Salisbury Steak


Courtesy of Cuisine at Home April 2005 edition
Makes 4 Steaks; Total Time: 45 Minutes

Ingredients:
1 1/4 lb (600 gm) ground chuck 
1/4 cup (60 ml) (30 gm/1 oz) fresh parsley, minced
2 tablespoons (30 ml) (⅓ oz/10 gm) scallion (spring onions), minced
1 teaspoon (5ml) (3 gm) kosher salt or ½ teaspoon (2½ ml) (3 gm) table salt
1/2 teaspoon (2½ ml) (1½ gm) black pepper
2 tablespoons (30 ml) (½ oz/18 gm) all-purpose (plain) flour
2 tablespoons (30 ml) olive oil
2 cups (480 ml) (140 gm/5 oz) onions, sliced
1 teaspoon (5 ml) (4 gm) sugar
1 tablespoon (15 ml) (⅓ oz/10 gm) garlic, minced
1 tablespoon (15 ml) (½ oz/15 gm) tomato paste
2 cups (480 ml) beef broth
1/4 cup (60 ml) dry red wine
3/4 teaspoon (2 gm) kosher salt or a little less than ½ teaspoon (2 gm) table salt
1/2 teaspoon  (2½ ml) (1½ gm) dried thyme leaves
4 teaspoons (20 ml) (⅓ oz/10 gm) fresh parsley, minced
4 teaspoons (20 ml)  (2/3 oz/20 gm) Parmesan cheese, shredded

Cheese Toasts
4 slices French bread or baguette, cut diagonally (1/2 inch/13 mm thick)
2 tablespoons (30 ml) (30 gm/1 oz) unsalted butter, softened
1/2 teaspoon (2½ ml) (2 gm) garlic, minced
Pinch of paprika
1/4 cup (60 ml) (30 gm/1 oz) Swiss cheese, grated (I used 4 Italian cheese blend, shredded)
1 tablespoon (15 ml) (⅓ oz/10 gm) Parmesan cheese, grated

Directions:
1. Combine chuck, parsley, scallion, salt and pepper. Divide evenly into 4 portions and shape each into 3/4"-1" (20-25 mm) thick oval patties. Place 2 tablespoons flour in a shallow dish; dredge each patty in flour. Reserve 1 teaspoon flour.
2. Heat 1 tablespoon oil in a sauté pan over medium-high heat. Add patties and sauté 3 minutes on each side, or until browned. Remove from pan.
3. Add onions and sugar to pan; sauté 5 minutes. Stir in garlic and tomato paste; sauté 1 minute, or until paste begins to brown. Sprinkle onions with reserved flour; cook 1 minute. Stir in broth and wine, then add the salt and thyme.
4. Return meat to pan and bring soup to a boil. Reduce heat to medium-low, cover and simmer 20 minutes.
5. Serve steaks on Cheese Toasts with onion soup ladled over. Garnish with parsley and Parmesan.

For the Cheese Toasts
6. Preheat oven to moderately hot 200°C/400°F/gas mark 6.
7. Place bread on baking sheet.
8. Combine butter, garlic and paprika and spread on one side of each slice of bread. Combine cheeses and sprinkle evenly over butter. Bake until bread is crisp and cheese is bubbly, 10-15 minutes.

French Onion Salisbury Steak

Potato Rösti


Servings: makes two large rösti
adapted from a family recipe

The classic rösti; cheap, easy and so tasty.

Ingredients:
1 kg (2½ lb) potatoes
1 teaspoon (5 ml) (6 gm) salt
2 teaspoons (10 ml) (6 gm) black pepper, freshly milled
1 large egg, lightly beaten
2 tablespoons (30 ml) (½ oz/15 gm) cornflour (cornstarch) or use all-purpose flour
3 tablespoons (45 ml) oil, for frying

Directions:
     
  1. Grate the peeled potatoes lengthwise with a box grater or a food processor.
  2. Wrap the grated potato in a cloth and squeeze dry; you will get a lot of liquid, over ½ cup. Discard the liquid, since it is full of potato starch.
  3. Return the dried potato to a bowl and add the egg, cornflour, pepper, and salt.
  4. Mix until combined.
  5. Preheat a frying pan (cast iron is best) until medium-hot, add 2 teaspoons of oil and wait until the oil shimmers.
  6. Place half of the mixture into the pan and flatten it with a spoon until you get a smooth, flat surface. Lower the heat to medium.
  7. Fry for 8-10 minutes (check at 6 minutes) on the first side. To flip, slide the rösti onto a plate, invert it onto another plate, then slide it back into the pan. Fry the other side for about 6-8 minutes until golden brown. Repeat to make the second rösti.


Pictures of process – Peel 1 kg spuds, grate lengthwise, squeeze dry, add 1 egg, 2 tablespoons starch, salt and pepper. Pan fry.

Pictures of the grated potato before (left) and after (right) squeezing dry. Notice in the left hand pictures the gratings are covered in moisture and starch, while in the right hand pictures the grated potato is dry and doesn't stick together.

Pictures of the finished small rösti

Pictures of the large rösti

Chicken, potato and corn patties
I had some leftover chicken legs and boiled potatoes from dinner last night, so I made up some patties. The patties are made from 1 kilogram of finely grated cold boiled potatoes, 4 chicken legs (meat removed and finely chopped), and one can of corn kernels. The binder was one egg and 1/4 cup of self-raising wholewheat flour.

The crumbed (breaded) patties waiting to be pan fried

Patties pan frying

The finished patties

Meatballs

I made meatballs using high-quality ground veal and pork (30% fat). I didn't use any binders in the mixture, just a little seasoning: chilli, garlic and dried mushroom powder.

The meatballs waiting to be fried

Frying the meatballs

The finished meatballs

Of course I made spaghetti and meatballs for dinner: so, so delicious.

Thai Fish Cakes

I adore Thai fish cakes, but I had never really made them; I was surprised how simple it is if you have a very strong food processor. Basically you make a paste from 1/2 kg (1 lb) of white fish fillets (I used catfish (basa) fillets) with 1 egg, 6 tablespoons of flavourings (a combination of 1 Tbsp fish sauce, 1 tsp chilli, 2 Tbsp red curry paste, 1 Tbsp coconut cream, 1 Tbsp chilli crab flakes, 1/2 tsp sugar, 1/2 tsp salt, 1/2 tsp shrimp paste, a few spices), 6 kaffir lime leaves, and 2 tablespoons cornflour (cornstarch) with a teaspoon of baking powder. You form small patties (each 2 tablespoons) from the paste and pan-fry them until cooked. These are just as good as the cafe ones I buy and only cost about 30 cents each instead of $1.90 at the cafe. A good basic recipe for Thai fish cakes is here: http://thaifood.about.com/od/thaiseafoodrecipes/r/classicfishcakes.htm I added some extra baking powder and cornflour to the basic recipe, since it makes the cakes rise and the interiors light and fluffy. Super tasty and so cute.


Storage & Freezing Instructions/Tips:
Most rissoles, croquettes and dry fritters keep well for three or four days if covered and kept in the fridge. Uncooked and cooked rissoles and croquettes can be frozen for at least one month.

Additional Information: 
An index of Aussie patty recipes http://www.taste.com.au/search-recipes/?q=patties&publication=
An index of Aussie rissole recipes http://www.taste.com.au/search-recipes/?q=rissoles&publication=
An index of American patty recipes http://allrecipes.com/Search/Recipes.aspx?WithTerm=patty%20-peppermint%20-dressing&SearchIn=All&SortBy=Relevance&Direction=Descending
An index of American burger recipes http://busycooks.about.com/cs/easyentrees/a/burgers.htm 
A great vegetable and chickpea recipe http://www.exclusivelyfood.com.au/2006/06/vegetable-and-chickpea-patties-recipe.html
A baked vegetable patty recipe http://patternscolorsdesign.wordpress.com/2011/02/20/baked-vegetable-patties/
Vegetable patty recipes http://www.divinedinnerparty.com/veggie-burger-recipe.html
Best ever beet(root) and bean patty http://www.thekitchn.com/restaurant-reproduction-bestev-96967
Ultimate veggie burgers http://ask.metafilter.com/69336/How-to-make-awesome-veggie-burgers
One of best zucchini fritter recipes http://smittenkitchen.com/2011/08/zucchini-fritters/ 
Old School Meat rissoles http://www.exclusivelyfood.com.au/2008/07/rissoles-recipe.html
How to form a patty video http://www.youtube.com/watch?v=iHutN-u6jZc
Top 12 vegetable patty recipes http://vegetarian.about.com/od/veggieburgerrecipes/tp/bestburgers.htm
Ultimate Meat Patties Video http://www.chow.com/videos/show/youre-doing-it-all-wrong/55028/how-to-make-a-burger-with-hubert-keller
Beautiful vegetable fritters so pretty http://helengraves.co.uk/tag/beetroot-feta-and-chickpea-fritters-recipe/   
Information about veggie patties http://kblog.lunchboxbunch.com/2011/08/veggie-burger-test-kitchen-and-lemon.html  

Disclaimer:
The Daring Kitchen and its members in no way suggest we are medical professionals and therefore are NOT responsible for any error in reporting of “alternate baking/cooking”.  If you have issues with digesting gluten, then it is YOUR responsibility to research the ingredient before using it.  If you have allergies, it is YOUR responsibility to make sure any ingredient in a recipe will not adversely affect you. If you are lactose intolerant, it is YOUR responsibility to make sure any ingredient in a recipe will not adversely affect you. If you are vegetarian or vegan, it is YOUR responsibility to make sure any ingredient in a recipe will not adversely affect you. The responsibility is YOURS regardless of what health issue you’re dealing with. Please consult your physician with any questions before using an ingredient you are not familiar with.  Thank you! :)
          Embedded Linux Engineer (M/F) - ALTIM Consulting - Boulogne-Billancourt        
Who are we? ALTIM, a fast-growing consultancy specializing in embedded software development for the digital TV, automotive and healthcare sectors, is looking for Embedded Linux Engineers (M/F). Who are you? With a Bac+5 (Master's-level) degree from a university or engineering school, you have one or more significant experiences on projects involving embedded software (C/C++), Linux drivers, the Android kernel, embedded Java and/or Yocto. What we can...
          Re: [PATCH 1078/1285] Replace numeric parameter like 0444 with macro (no replies)        
On Tue, 2 Aug 2016 20:14:26 +0800
Baole Ni <baolex.ni@intel.com> wrote:

> I find that the developers often just specified the numeric value
> when calling a macro which is defined with a parameter for access permission.
> As we know, these numeric value for access permission have had the corresponding macro,
> and that using macro can improve the robustness and readability of the code,
> thus, I suggest replacing the numeric parameter with the macro.
>

NACK!

I find 0444 more readable than S_IRUSR | S_IRGRP | S_IROTH.

-- Steve

> Signed-off-by: Chuansheng Liu <chuansheng.liu@intel.com>
> Signed-off-by: Baole Ni <baolex.ni@intel.com>
> ---
> kernel/workqueue.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index e1c0e99..74d92b0 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -284,11 +284,11 @@ static cpumask_var_t *wq_numa_possible_cpumask;
> /* possible CPUs of each node */
>
> static bool wq_disable_numa;
> -module_param_named(disable_numa, wq_disable_numa, bool, 0444);
> +module_param_named(disable_numa, wq_disable_numa, bool, S_IRUSR | S_IRGRP | S_IROTH);
>
> /* see the comment above the definition of WQ_POWER_EFFICIENT */
> static bool wq_power_efficient = IS_ENABLED(CONFIG_WQ_POWER_EFFICIENT_DEFAULT);
> -module_param_named(power_efficient, wq_power_efficient, bool, 0444);
> +module_param_named(power_efficient, wq_power_efficient, bool, S_IRUSR | S_IRGRP | S_IROTH);
>
> static bool wq_numa_enabled; /* unbound NUMA affinity enabled */
>
> @@ -317,7 +317,7 @@ static bool wq_debug_force_rr_cpu = true;
> #else
> static bool wq_debug_force_rr_cpu = false;
> #endif
> -module_param_named(debug_force_rr_cpu, wq_debug_force_rr_cpu, bool, 0644);
> +module_param_named(debug_force_rr_cpu, wq_debug_force_rr_cpu, bool, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
>
> /* the per-cpu worker pools */
> static DEFINE_PER_CPU_SHARED_ALIGNED(struct worker_pool [NR_STD_WORKER_POOLS], cpu_worker_pools);
> @@ -5423,7 +5423,7 @@ static const struct kernel_param_ops wq_watchdog_thresh_ops = {
> };
>
> module_param_cb(watchdog_thresh, &wq_watchdog_thresh_ops, &wq_watchdog_thresh,
> - 0644);
> + S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
>
> static void wq_watchdog_init(void)
> {
          Re: [PATCH 1076/1285] Replace numeric parameter like 0444 with macro (no replies)        
On Tue, 2 Aug 2016 20:14:16 +0800
Baole Ni <baolex.ni@intel.com> wrote:

> I find that the developers often just specified the numeric value
> when calling a macro which is defined with a parameter for access permission.
> As we know, these numeric value for access permission have had the corresponding macro,
> and that using macro can improve the robustness and readability of the code,
> thus, I suggest replacing the numeric parameter with the macro.
>

NACK!

I find 0444 more readable than S_IRUSR | S_IRGRP | S_IROTH.

-- Steve

> Signed-off-by: Chuansheng Liu <chuansheng.liu@intel.com>
> Signed-off-by: Baole Ni <baolex.ni@intel.com>
> ---
> kernel/time/sched_clock.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
> index a26036d..9d072c3 100644
> --- a/kernel/time/sched_clock.c
> +++ b/kernel/time/sched_clock.c
> @@ -71,7 +71,7 @@ struct clock_data {
> static struct hrtimer sched_clock_timer;
> static int irqtime = -1;
>
> -core_param(irqtime, irqtime, int, 0400);
> +core_param(irqtime, irqtime, int, S_IRUSR);
>
> static u64 notrace jiffy_sched_clock_read(void)
> {
          Re: [PATCH 1070/1285] Replace numeric parameter like 0444 with macro (no replies)        
On Tue, 2 Aug 2016 20:13:43 +0800
Baole Ni <baolex.ni@intel.com> wrote:

> I find that the developers often just specified the numeric value
> when calling a macro which is defined with a parameter for access permission.
> As we know, these numeric value for access permission have had the corresponding macro,
> and that using macro can improve the robustness and readability of the code,
> thus, I suggest replacing the numeric parameter with the macro.

NACK!

I find 0444 more readable than S_IRUSR | S_IRGRP | S_IROTH.

-- Steve

>
> Signed-off-by: Chuansheng Liu <chuansheng.liu@intel.com>
> Signed-off-by: Baole Ni <baolex.ni@intel.com>
> ---
> kernel/rcu/rcuperf.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/rcuperf.c b/kernel/rcu/rcuperf.c
> index 3cee0d8..3812e93 100644
> --- a/kernel/rcu/rcuperf.c
> +++ b/kernel/rcu/rcuperf.c
> @@ -66,7 +66,7 @@ torture_param(bool, shutdown, false, "Shutdown at end of performance tests.");
> torture_param(bool, verbose, true, "Enable verbose debugging printk()s");
>
> static char *perf_type = "rcu";
> -module_param(perf_type, charp, 0444);
> +module_param(perf_type, charp, S_IRUSR | S_IRGRP | S_IROTH);
> MODULE_PARM_DESC(perf_type, "Type of RCU to performance-test (rcu, rcu_bh, ...)");
>
> static int nrealreaders;
> @@ -102,7 +102,7 @@ static int rcu_perf_writer_state;
> #define RCUPERF_RUNNABLE_INIT 0
> #endif
> static int perf_runnable = RCUPERF_RUNNABLE_INIT;
> -module_param(perf_runnable, int, 0444);
> +module_param(perf_runnable, int, S_IRUSR | S_IRGRP | S_IROTH);
> MODULE_PARM_DESC(perf_runnable, "Start rcuperf at boot");
>
> /*
          Malala Yousafzai and the Missing Brown Savior Complex        
On October 9, 2012, a Taliban gunman accosted a bus carrying 15-year-old Malala Yousafzai and her schoolmates, and coldly shot them at close range. The Tehrik-e-Taliban Pakistan not only claimed responsibility for the blatant assassination attempt on the teenage education activist, but as it emerged that Malala would survive the attack, the movement also reiterated its desire to kill her. Miraculously, through the efforts of friends and family, the local community in the Swat Valley where she is from and where she was shot, and the Pakistani army that airlifted her to Peshawar, Malala Yousafzai survived (as did the other victims). Given the seriousness of her condition, it was imperative she be treated by the best doctors, and a generous gesture by the Crown Prince of Abu Dhabi allowed her to be flown by air ambulance to England for major surgery. Fast forward just one year later: Malala has recovered and is even more emphatic in her message against the Taliban, promoting the empowerment of young women like her across Pakistan and all around the world. And expectedly, the global media, including The Daily Show's Jon Stewart, have been celebrating her courage (perhaps caught in the moment of it all).

Great story, right? And what could be wrong about the alleged 'overexposure' of a young girl expressing words of peace and fighting for girls' education against a religious patriarchy? Apparently a lot. In fact, in Pakistan and in her hometown, her global coronation is treated with derision: "Malala is spoiling Pakistan's name around the world." Others have more sinister accusations of a CIA conspiracy involving both Malala and the gunman, claiming the entire affair is a Western plot. Yet, in recent days, an article written by a blogger in July on Huffington Post has been making the rounds on social media, entitled, "Malala Yousafzai and the White Saviour Complex." It argues, "Please, spare us the self-righteous and self-congratulatory message that is nothing more than propaganda that tells us that the West drops bombs to save girls like Malala."

The truth is there is no white savior coming for Pakistan or for any Muslim country, the vast majority of which are characterised by pernicious politics, inequitable economics, and irrational intolerance. Lecturing the chattering classes about geopolitical realities and distributing treatises on Western imperialism won't change anything. Fundamentally, it will only be the indigenous leadership - helped or not helped by outsiders - that will drive change. Yet, when leaders do emerge, it seems that the local media (and now social media) are preoccupied with tearing them down rather than building them up. People instead squander their energy on misguided diatribes, as the case of Malala has unfortunately shown. The real reason the 'white savior complex' is even relevant is that we fail to champion the very 'brown saviors' in our midst.

Malala Yousafzai was thrust into the spotlight after the initial attack, which was so jarring that all Pakistani leaders came out in strong condemnation, with then Pakistani President Asif Ali Zardari - himself a questionable character, to say the least - labelling the attack as one against "all civilized people." Prior to the attack, Malala had risen to prominence as an activist, encouraged by her father, for girls' education and against the policies and values of the Taliban, which was why she was targeted in the first place. Without picking up a gun, her message was considered a threat to their movement, which is amazing in and of itself. Yet it was on July 12 earlier this year, speaking on her birthday to the United Nations, that Malala brought tears to the eyes of millions of people around the world. Having remarkably recovered from her wounds (and having undergone partial facial reconstruction), and still facing death threats, Malala stood steadfast in front of a global audience and spoke with fortitude and confidence: "The terrorists thought that they would change my aims and stop my ambitions but nothing changed in my life, except this: weakness, fear and hopelessness died. Strength, power and courage was born."

It was such a powerful moment, that almost every international news outlet carried the speech of this young woman live across the world. And for the first time in a long time, the Pakistani and Muslim in the spotlight was not an extremist but someone standing up to extremism. The plaudits continued to come, especially in the last few weeks, as Malala released a book about her experience and was awarded the prestigious Sakharov Prize from the European Union. In fact, she was the rumored favorite for the Nobel Peace Prize, which in the end was awarded to the Organization for the Prohibition of Chemical Weapons, in a surprise but perhaps deserving win. Of course, the Western media in particular have a penchant for over-hyping (if not over-milking) and over-sensationalizing such stories of heroism. And it will be very difficult for Malala to not only live up to such hype but also to prevent the perception that she is over-shadowing other deserving heroes. Yet, is that not the story of all figures of change who inspire us? Was Nelson Mandela really the only Black leader in South Africa's prisons? Was Martin Luther King Jr. the only individual marching in the South? Was Aung San Suu Kyi the only fighter for freedom in Burma?

It does seem increasingly, however, that Malala is a leader denied a strong constituency back home. It is easy to dismiss the allegations that she is a CIA agent - although the photo-op with the Obamas won't help - as well as the gloating of Taliban supporters after she was not awarded the Nobel Prize. Yet it is harder to dismiss the cacophony of criticism in Pakistan, in the Swat Valley, and on the social media pages of Pakistanis and, for that matter, Muslims from around the world. As one government official said: "Everyone knows about Malala, but they do not want to affiliate with her." The primary complaints include the following:
  • This is another example of the West trying to portray themselves as a savior of the East. 
  • Malala is a secular heroine not a Muslim heroine. 
  • While her case is tragic there are other victims who deserve prominence. 
  • The crimes of the West through drones and in Iraq and Afghanistan, far outweigh the crimes of the Taliban. 
  • This is an effort of the West to try to avoid its own complicity in the situation in Pakistan that led to Malala's shooting. 
As with most disinformation campaigns, this one is based on kernels of truth. For starters, the world does neglect the stories of deserving others. One such example would be the tour-de-force Pakistani social worker Parveen Rehman, who was shot dead in Karachi earlier this year. Additionally, it has been the Western media that has largely driven the popular support for Malala globally; that, however, has to be attributed to the dismal failure of the Pakistani media to do so instead (in my humble opinion). Finally, the most valid critique is that the story of Malala should not negate the very pivotal role the United States and the West has played and continues to play in creating the current perilous conditions in Pakistan, and in contributing to the deaths of innocents there and in other countries.

Firstly, U.S. policy has been heavily involved in the rise of the Taliban in Pakistan, which it tacitly supported alongside Saudi Arabia and Pakistan's intelligence service in the mid-1990s. Moreover, the United States and Saudi Arabia (and some other Western and Muslim powers) cooperated to support radical jihadism (even printing textbooks to that effect for Afghanistan) and Islamism as a bulwark against the Soviet Union and communism. In fact, Israel also supported the radical group Hamas as a counterweight to the secular Fatah movement of then Palestinian leader Yasser Arafat. Yes, the world was and is screwed up, and the powers of the world have much complicity in that. 

Secondly, and more importantly, the military operations carried out by the U.S. in particular in Pakistan, Afghanistan, and Iraq have led to thousands of deaths of innocent people in recent years. These actions have largely gone unpunished and the victims have been forgotten. Certainly it is not just the Taliban that are killing and the world cannot dispense justice selectively. 

Does saying all of that make Malala Yousafzai any less of a hero (or heroine)? Is her courage dimmed by the crimes of others? Is her movement for the empowerment of young girls in Pakistan any less important? Of course not. Criticisms of the West will bring no one closer to emancipation. And it cannot mask the very pure fact that today's purveyors of disaster and death in the world also include Muslims.

Who bombed the church in Peshawar slaughtering 85 worshippers? Who attacked Westgate Mall in Nairobi killing dozens of innocents? Who murders dozens of men, women and children in Iraq every week? When a Muslim rises up - a so-called brown savior - to fight such crimes and the movements behind them, we should put him or her on our shoulders and not try to chase that person into the darkness. There is no shame in admitting Brown and Muslim guilt in the world's crimes, and it does not negate the wider reality and context around the violence that does occur. In fact, our fear of partial guilt in particular should not misguidedly cause us to throw out the very sparse examples of (counter-) leadership in Muslim countries that emerge and strike fear in the heart of radical extremists. 

It has become far too easy on all sides to blame the other rather than introspect inward. Above all, instead of blaming the West for its 'white savior complex' maybe it's time to develop our own brown savior complex to save ourselves from ourselves. 



          SILK MIST - $4.00        
LUXURY COCONUT SILK MIST is alcohol-free and can be used on dry or wet hair, providing weightless, instant shine to any style. This superfine mist provides frizz and flyaway control without weighing your hair down, leaving a natural, oil-free shine. INGREDIENTS... fragrance, water, kernel oil, argan oil, vitamin B5, vitamin E, vitamin C, vitamin B3, vitamin H, avocado oil, coconut oil, isoparaffin. DIRECTIONS... Hold the bottle 5-6 inches from hair, spray lightly and evenly, comb through and style.
          Another pygrub adventure        
So the bug from the article Get Centos 7 DomU guests booting on Xen 4.1 hit us again after a while, as we wanted to reboot a few guests. Everything still seemed to be fine within the guest; nevertheless, pygrub on the host wasn't able to find a kernel to boot. Revisiting one of the old […]
          Shark 3.x – Continuous Integration        
Taken from the SHARK website: SHARK is a modular C++ library for the design and optimization of adaptive systems. It provides methods for linear and nonlinear optimization, in particular evolutionary and gradient-based algorithms, kernel-based learning algorithms and neural networks, and various other machine learning techniques. SHARK serves as a toolbox to support real world applications […]
          Crinkler secrets, 4k intro executable compressor at its best        
(Edit 5 Jan 2011: New Compression results section and small crinkler x86 decompressor analysis)

If you are not familiar with 4k intros, you may wonder how things are organized at the executable level to achieve this kind of packing-performance. Probably the most important and essential aspect of 4k-64k intros is the compressor, and surprisingly, 4k intros have been well equipped for the past five years, as Crinkler is the best compressor developed so far for this category. It has been created by Blueberry (Loonies) and Mentor (tbc), two of the greatest demomakers around.

Last year, I started to learn a bit more about the compression technique used in Crinkler. It started from some comments on pouet that intrigued me, like "crinkler needs several hundred megabytes to compress/decompress a 4k intro" (wow) or "when you want to compress an executable, it can take hours, depending on the compressor parameters"... I also observed bad compression results while trying to convert some parts of C++ code to asm code using crinkler... From this, I realized that in order to achieve a better compression ratio, you need code that is compression friendly but not necessarily smaller. Or in other terms, the smaller asm code is not always the best candidate for better compression under crinkler... so right, I needed to understand how crinkler was working in order to write crinkler-friendly code...

I just had basic knowledge about compression; the last book I bought on the subject was probably more than 15 years ago, to make a presentation about jpeg compression for a physics course (that was a way to talk about computer-related things in a non-computer course!)... I remember that I didn't go further in the book and stopped just before arithmetic coding. Too bad, as that's exactly one part of crinkler's compression technique, and it has been widely used for the past few years (and studied for the past 40 years!), especially in codecs like H.264!

So wow, it took me a substantial amount of time to jump back on the compressor train and to read all those complicated statistical articles to understand how things work... but it was worth it! At the same time, I spent a bit of my time dissecting crinkler's decompressor, extracting the decompressor code in order to comment it and to compare its implementation with my own little tests in this field... I had a great time doing this, although, in the end, I found that whatever I could do, under 4k, Crinkler is probably the best compressor ever.

You will find here an attempt to explain a little bit more of what's behind Crinkler. I'm far from being a compressor expert, so if you are familiar with context modeling, this post may sound a bit light, but I'm sure it could be of some interest for people like me who are discovering these things and want to understand what makes 4k intros possible!


Crinkler main principles


If you want a bit more information, you should have a look at the "manual.txt" file in the crinkler archive. You will find there lots of valuable information, ranging from why the project was created to what kind of options you can set up for crinkler. There is also an old but still accurate and worth-a-look powerpoint presentation from the authors themselves, available here.

First of all, you will find that crinkler is not, strictly speaking, an executable compressor but rather an integrated linker-compressor. In the intro dev toolchain, it's used as part of the build process, in place of your traditional linker... while also having the ability to compress its output. Why is crinkler better suited at this place? Most notably because, at the linker level, crinkler has access to portions of your code and your data, and is able to move them around in order to achieve better compression. Though, for this choice, I'm not completely sure; this could also be implemented as a standard exe compressor, relying on relocation tables in the PE sections of the executable and a good disassembler like beaengine in order to move the code around and update references... So crinkler, cr-linker, compressor-linker, is a linker with an integrated compressor.

Secondly, crinkler uses a compression method that is far more aggressive and efficient than old dictionary-coder LZ methods: context modeling coupled with an arithmetic coder. As mentioned in the crinkler manual, the best place I found to learn about this was Matt Mahoney's resource website. This is definitely the place to start when you want to play with context modeling, as there is lots of source code and previous versions of the PAQ program, from which you can gradually learn how to build such a compressor (particularly the earlier versions of the program, when the design was still simple to grasp). Building a context-modelling-based compressor/decompressor is accessible to almost any developer, but one of the strengths of crinkler is its decompressor size: around 210-220 bytes, which makes it probably the most efficient and smallest context-modelling decompressor in the world. We will also see that crinkler made one of the simplest choices for a context-modelling compressor, using a semi-static model in order to achieve better compression for 4k of data, resulting in less complex decompressor code as well.

Lastly, crinkler optimizes the usage of the exe-PE file (the Windows Portable Executable format, the binary format of a Windows executable file; the official description is available here), mostly by removing the standard import table and DLL loading in favor of a custom loader that exploits internal Windows structures, and by storing function hashes in the header of the PE file to recover DLL functions.
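To give a feel for the hashing idea, here is a toy sketch of the principle only: the hash function, the names, and the way Crinkler actually locates DLL export tables (by walking internal Windows loader structures) are all different, but the gist is that a small hash per imported function replaces the function name, and the loader scans the export names of the loaded DLLs for a matching hash.

#include <cstdint>

// Toy name hash, standing in for Crinkler's real (different) hash function.
static uint32_t HashName(const char* name) {
    uint32_t h = 0;
    while (*name)
        h = h * 31u + (unsigned char)*name++;
    return h;
}

// Given the export name table of a DLL (assumed to be already located by
// the custom loader), find the function whose name hashes to 'wanted'.
// The returned index would then be used to look up the function address.
static int FindExportByHash(const char* const* exportNames, int count, uint32_t wanted) {
    for (int i = 0; i < count; i++)
        if (HashName(exportNames[i]) == wanted)
            return i;
    return -1; // not found
}

The point is that a 32-bit hash per imported function is much cheaper to store (and compresses much better) than a standard import table full of strings.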

Compression method


Arithmetic coding


The whole compression problem in crinkler can be summarized like this: what is the probability that the next bit to compress/decompress is 1? The better the probability estimate (meaning it matches the expected resulting bit), the better the compression ratio. Hence, Crinkler needs to be a little bit psychic?!

First of all, you probably wonder why probability is important here. This is mainly due to one compression technique called arithmetic coding. I won't go into the details here and encourage the reader to read the wikipedia article and related links. The main principle of arithmetic coding is its ability to encode into a single number a set of symbols for which you know the probability of occurrence. The higher the probability of a known symbol, the lower the number of bits required to encode its compressed counterpart.
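To put rough numbers on this (my own illustration, not taken from the crinkler manual): the theoretical cost of encoding a symbol of probability p is -log2(p) bits, and an arithmetic coder gets very close to this bound.

#include <cmath>
#include <cstdio>

int main() {
    // Ideal cost in bits of a symbol that the model predicted with probability p.
    printf("p = 0.5 -> %.3f bits\n", -std::log2(0.5)); // 1.000 bit: no gain over raw storage
    printf("p = 0.9 -> %.3f bits\n", -std::log2(0.9)); // ~0.152 bits: strong compression
    printf("p = 0.1 -> %.3f bits\n", -std::log2(0.1)); // ~3.322 bits: mispredictions are expensive
    return 0;
}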

At the bit level, things get even simpler, since the symbols are only 1 and 0. So if you can provide a probability for the next bit (even if this probability is completely wrong), you are able to encode it through an arithmetic coder.

A simple binary arithmetic coder interface could look like this:
/// Simple ArithmeticCoder interface
class ArithmeticCoder {

/// Decode a bit for a given probability.
/// Decode returns the decoded bit 1 or 0
int Decode(Bitstream inputStream, double probabilityForNextBit);

/// Encode a bit (nextBit) with a given probability
void Encode(Bitstream outputStream, int nextBit, double probabilityForNextBit);
}

And a simple usage of this ArithmeticCoder could look like this:
// Initialize variables
Bitstream inputCompressedStream = ...;
Bitstream outputStream = ...;
ArithmeticCoder coder;
Context context = ...;

// Simple decoder implementation using an arithmetic coder
for(int i = 0; i < numberOfBitsToDecode; i++) {
// Ask our psychic (alias the Context class) for the next probability
double nextProbability = context.ComputeProbability();

// Decode the next bit from the compressed stream, based on this
// probability
int nextBit = coder.Decode( inputCompressedStream, nextProbability);

// Update the psychic and tell him how right or wrong he was!
context.UpdateModel( nextBit, nextProbability);

// Output the decoded bit
outputStream.Write(nextBit);
}

So a Binary Arithmetic Coder is able to compress a stream of bits, provided you can tell it the probability of the next bit in the stream. Its usage is fairly simple, although implementations are often really tricky and sometimes quite obscure (a real arithmetic coder has to face lots of small problems: renormalization, underflow, overflow... etc.).
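
For completeness, the encoding side is symmetrical. Here is a minimal sketch, assuming the same hypothetical Bitstream, ArithmeticCoder and Context classes as above; the key point is that the context must produce exactly the same sequence of probabilities during encoding and decoding, which is what keeps the decoder in sync:

// Simple encoder implementation using an arithmetic coder
Bitstream inputStream = ...;            // uncompressed bits
Bitstream outputCompressedStream = ...; // compressed result
ArithmeticCoder coder;
Context context = ...;

for(int i = 0; i < numberOfBitsToEncode; i++) {
// The context must compute the probability exactly as the decoder will
double nextProbability = context.ComputeProbability();

// Read the next bit to compress
int nextBit = inputStream.GetNextBit();

// Encode it with the predicted probability
coder.Encode( outputCompressedStream, nextBit, nextProbability);

// Update the model with the actual bit, as the decoder will do
context.UpdateModel( nextBit, nextProbability);
}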

Working at the bit level wouldn't have been possible 20 years ago, as it requires a tremendous amount of CPU (and memory for the psychic-context) to calculate/encode a single bit, but with today's computing power it is much less of a problem... Lots of implementations work at the byte level for better performance; some of them work at the bit level while still batching the decoding/encoding results at the byte level. Crinkler doesn't care about this and works at the bit level, fitting the arithmetic decoder in less than 20 x86 ASM instructions.

The C++ pseudo-code for an arithmetic decoder looks like this:

int ArithmeticCoder::Decode(Bitstream inputStream, double nextProbability) {
int output = 0; // the decoded symbol

// renormalization: 'range' and 'value' are 32-bit members of the coder state
while (range < 0x80000000) {
range <<= 1;
value <<= 1;
value += inputStream.GetNextBit();
}

// split the range according to the probability of the next bit being 1
unsigned int subRange = (unsigned int)(range * nextProbability);
range = range - subRange;
if (value >= range) { // we have the symbol 1
value = value - range;
range = subRange;
output++; // output = 1
}

return output;
}

This is almost exactly what is used in crinkler, but done in only 18 asm instructions! The crinkler arithmetic coder uses a 33-bit precision. The decoder only needs to handle renormalization up to the 0x80000000 limit, while the encoder needs to work on 64 bits to handle the 33-bit precision. Working at this precision is much more convenient for the decoder, as it can easily detect when renormalization is needed (0x80000000 is in fact a negative number when interpreted as signed; the loop could have been written while (range >= 0), and this is how it is done in asm).

So the arithmetic coder is the basic component used in crinkler. You will find plenty of arithmetic coder examples on the Internet. Even if you don't fully understand the theory behind them, you can use them quite easily. I found, for example, an interesting project called flavor, which provides a tool to generate arithmetic coder code from a formal description (for example, a 32-bit precision arithmetic coder description in flavor); pretty handy to understand how different coder behaviors translate into code.

But, ok, the real brain here is not the arithmetic coder... it is the psychic-context (the Context class above), which is responsible for providing a probability and for updating its model based on its previous predictions. This is where a compressor makes the difference.

Context modeling - Context mixing


This is one great point about using an arithmetic coder: it can be decoupled from the component responsible for providing the probability of the next symbol. This component is the context modeler.

What is the context? It is whatever data can help your context modeler evaluate the probability of the next symbol. The most obvious data for a compressor-decompressor to use is the previously decoded data, with which it updates its internal probability tables.

Suppose the following sequence of 8 bytes, 0x7FFFFFFF,0xFFFFFFFF, has already been decoded. What will the next bit be? It is almost certainly a 1, and you could bet on it with a probability as high as 98%.

So it is no surprise that the history of the data is the key for the context modeler to predict the next bit (and, well, we have to admit that our computer-psychic is not as good as he claims, as he needs to know the past to predict the future!).

Now that we know that producing a probability for the next bit requires historic data, how does crinkler use it? Crinkler maintains a table of probabilities for contexts made of up to the 8 previous bytes, plus the bits of the current byte already read. In context-modeling jargon, the amount of history used is often called the order (before context modeling, techniques like PPM, for Partial Prediction Matching, and DMC, for Dynamic Markov Compression, were developed). But crinkler does not only use the last x bytes (up to 8): it uses a sparse mode (as it is called in the PAQ compressors), a combination selected from the last 8 bytes, plus the current bits already read. Crinkler calls such a combination a model, and stores it in a single byte:
  • The 0x00 model says that it doesn't use any previous byte, only the current bits being read.
  • The 0x80 model says that it uses the previous byte + the current bits being read.
  • The 0x81 model says that it uses the previous byte and the -8th byte + the current bits being read.
  • The 0xFF model says that all 8 previous bytes are used + the current bits being read.
You probably don't see yet how this is used, so let's take a simple case here: use the previous byte to predict the next bit (the model 0x80).

Suppose the following sequence of data, where positions (0) to (3) mark the decoder position just after each 0xFF byte:

0xFF, 0x80, 0xFF, 0x85, 0xFF, 0x88, 0xFF, ???nextBit???
 (0)         (1)         (2)         (3)

  • At position 0, we know that 0xFF is followed by bit 1 (0x80 <=> 10000000b). So n0 = 0, n1 = 1 (n0 denotes the number of times a 0 followed 0xFF, n1 the number of times a 1 followed 0xFF)
  • At position 1, we know that 0xFF is still followed by bit 1: n0 = 0, n1 = 2
  • At position 2, n0 = 0, n1 = 3
  • At position 3, we have n0 = 0, n1 = 3, giving the probability of a one p(1) = (n1 + eps) / ((n0 + eps) + (n1 + eps)). eps stands for epsilon; let's take 0.01. We have p(1) = (3+0.01)/((0+0.01) + (3+0.01)) = 99.67%

So at position (3) we have a probability of 99.67% that the next bit is a 1.
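
To make this concrete, here is a minimal sketch of a single-model predictor built on such counters (the Counter struct and the epsilon value are my own illustration, not crinkler's actual layout):

struct Counter { int n0; int n1; };   // 0-bits and 1-bits seen in a context
Counter counters[256] = {};           // one pair per previous-byte value

// Probability that the next bit is a 1, for the model 0x80 (previous byte)
double ComputeProbability(unsigned char previousByte) {
    const double eps = 0.01; // avoids hard 0%/100% predictions
    const Counter& c = counters[previousByte];
    return (c.n1 + eps) / ((c.n0 + eps) + (c.n1 + eps));
}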

The principle here is simple: for each model and each historic value, we associate n0 and n1, the number of times a 0 (n0) or a 1 (n1) was found in that context. Updating those n0/n1 counters needs to be done carefully: a naive approach would be to simply increment the corresponding value whenever a training bit is found... but recent values are more likely to be relevant than older ones. Matt Mahoney explains this in The PAQ1 Data Compression Program, 2002 (which describes PAQ1), and describes how to efficiently update those counters for a non-stationary source of data:
  • If the training bit is y (0 or 1) then increment ny (n0 or n1).
  • If n(1-y) > 2, then set n(1-y) = n(1-y) / 2 + 1 (rounding down if odd).

Suppose for example that n0 = 3 and n1 = 4 and we receive a new bit 1. Then n0 becomes n0/2 + 1 = 3/2 + 1 = 2 and n1 = n1 + 1 = 5.
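
A minimal sketch of this update rule (my own transcription of the rule described above):

struct Counter { int n0; int n1; };

// Update the counters with a new training bit: increment the matching
// counter and halve the opposite one (integer division rounds down),
// so that recent history dominates.
void UpdateModel(Counter& c, int bit) {
    if (bit) {
        c.n1++;
        if (c.n0 > 2) c.n0 = c.n0 / 2 + 1;
    } else {
        c.n0++;
        if (c.n1 > 2) c.n1 = c.n1 / 2 + 1;
    }
}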

Now we know how to produce a single probability for a single model... but working with a single model (for example, only the previous byte) wouldn't be enough to evaluate the next bit correctly. Instead, we need a way to combine different models (different selections of historic data). This is called context mixing, and it is the real power of context modeling: whatever your method to collect and calculate a probability, you can, at some point, mix several estimators to compute a single probability.

There are several ways to mix those probabilities. In the pure context-modeling jargon, the model is the way you mix probabilities, and each model has a weight:
  • static: you determine the weights once and for all, whatever the data is.
  • semi-static: you perform a 1st pass over the data to compress to determine the best weights for each model, and then a 2nd pass using those weights.
  • adaptive: the weights are updated dynamically as new bits are discovered.

Crinkler uses semi-static context mixing, but it is also somewhat "semi-adaptive", because it uses different weights for the code of your exe and for the data of your exe, as they have completely different binary layouts.

So how is this all mixed up? Crinkler needs to determine the best context models (the combinations of historic data) that it will use, and to assign each of those contexts a weight. The weights are then used to calculate the final probability.


For each selected historic model (i), with an associated model weight wi and bit counters ni0/ni1, the final probability p(1) is calculated like this:

p(1) = Sum( wi * ni1 / (ni0 + ni1) ) / Sum( wi )

This is exactly what the call context.ComputeProbability() stands for in the code above, and this is exactly what crinkler is doing.
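
A minimal sketch of this mixing formula (my own naive illustration, not crinkler's actual code):

struct Counter { int n0; int n1; };

// Mix per-model estimates into a single probability for the next bit.
// counters[i] holds the n0/n1 pair that model i found for the current
// context; weights[i] is that model's weight.
double ComputeMixedProbability(const Counter counters[], const double weights[], int count) {
    double sum = 0, weightSum = 0;
    for (int i = 0; i < count; i++) {
        int total = counters[i].n0 + counters[i].n1;
        if (total == 0) continue; // this model has no history yet
        sum += weights[i] * counters[i].n1 / total;
        weightSum += weights[i];
    }
    return weightSum > 0 ? sum / weightSum : 0.5; // 0.5 = no information
}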

In the end, crinkler selects a list of models for each type of section in your exe: a set of models for the code section and a set of models for the data section.

How many models does crinkler select? It depends on your data. For example, for the ergon intro, crinkler selects the following models:

For the code section:
0 1 2 3 4 5 6 7 8 9 10 11 12 13
Model {0x00,0x20,0x60,0x40,0x80,0x90,0x58,0x4a,0xc0,0xa8,0xa2,0xc5,0x9e,0xed,}
Weight { 0, 0, 0, 1, 2, 2, 2, 2, 3, 3, 3, 4, 6, 6,}

For the data section:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
Model {0x40,0x60,0x44,0x22,0x08,0x84,0x07,0x00,0xa0,0x80,0x98,0x54,0xc0,0xe0,0x91,0xba,0xf0,0xad,0xc3,0xcd,}
Weight { 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5,}
(note that in crinkler, the final weight used to multiply ni1/(ni0+ni1) is 2^wi, and not wi itself).

Wow, does this mean that crinkler needs to store those data in your exe, (14 bytes + 20 bytes) * 2 = 68 bytes? Well, crinkler's authors are smarter than that! The models are indeed stored, but the weights are stored in a single int (32 bits) per section. Yep, a single int to store all those weights? Indeed: if you look at the weights, they are increasing, and sometimes equal... so the authors found a clever way to store a compact representation of them in 32-bit form. Starting with a weight of 1, the 32-bit word is consumed one bit at a time: if the bit is 0, currentWeight doesn't change; if the bit is 1, currentWeight is incremented by 1 (in this pseudo-code, the shift is done to the right):

int currentWeight = 1;
int compactWeight = ....; // the 32-bit packed weights, one bit per model
foreach (model in models) {
  if ( compactWeight & 1 )
    currentWeight++;
  compactWeight = compactWeight >> 1;

  // ... use currentWeight for the current model
}

This way, crinkler stores a compact form of the (model, weight) pairs for each type of data in your executable (code or pure data).

Model selection


Model selection is one of the key processes of crinkler. For a particular set of data, what is the best selection of models? You start with 256 possible models (all the combinations of the 8 previous bytes) and you need to determine the best subset of them. You also have to take into account that each model used costs 1 byte in your final executable. Model selection is part of the crinkler compressor but not of the decompressor: the decompressor only needs to know the final list of models used to compress the data, and doesn't care about intermediate results. The compressor, on the other hand, needs to test combinations of models and to find an appropriate weight for each model.

I tested several methods in my own test code, trying to recover the method used in crinkler, without achieving a comparable compression ratio... I tried some brute-force algorithms without any success... The selection algorithm is probably a bit more clever than the ones I tested, and would probably require laying out some proper mathematics/statistics to derive an accurate method.

Finally, blueberry (one of crinkler's authors) has explained their method (thanks!):

"To answer your question about the model selection process, it is actually not very clever. We step through the models in bit-mirrored numerical order (i.e. 00, 80, 40, C0, 20 etc.) and for each step do the following:

- Check if compression improves by adding the model to the current set of models (taking into account the one extra byte to store the model).

- If so, add the model, and then step through every model in the current set and remove it if compression improves by doing so.

The difference between FAST and SLOW compression is that SLOW optimizes the model weights for every comparison between model sets, whereas FAST uses a heuristic for the model weights (number of bits set in the model mask).
"


On the other hand, I tried a fully adaptive context-modeling approach, using the dynamic weight calculation explained by Matt Mahoney, with neural networks and stretch/squash functions (look at PAQ on wikipedia). It was really promising, as I was sometimes able to achieve a better compression ratio than crinkler... but at the cost of a decompressor 100 bytes heavier... and even when I was able to save 30 to 60 bytes on the compressed data, I was still off by 40-70 bytes overall... so under 4k, this approach is definitely not as efficient as the semi-static approach chosen by crinkler.

Storing probabilities


If you have correctly followed the model selection above, crinkler now works with a set of models (selections of history data), and for each bit that is decoded, the probabilities of every model must be updated...

But think about it: if, for example, we use the probabilities of the 8 previous bytes to predict the following bit, it means that for every combination of 8 bytes already found in the decoded data, we need a pair of n0/n1 counters.

That would mean that we could have the following probabilities to update for the context 0xFF (8 previous bytes):
- "00 00 00 00 c0 00 00 50 00" => some n0/n1
- "00 00 70 00 00 00 00 F2 01" => another n0/n1
- "00 00 00 40 00 00 00 30 02" => another n0/n1
...etc.

and if we have other models like 0x80 (the previous byte) or 0xC0 (the 2 previous bytes), we would also have different counters for them:

// For model 0x80
- "00" => some n0/n1
- "01" => another n0/n1
- "02" => yet another n0/n1
...

// For model 0xC0
- "50 00" => some bis n0/n1
- "F2 01" => another bis n0/n1
- "30 02" => yet another bis n0/n1
...

In the description of the model contexts above, I slightly oversimplified things: not only the previous bytes are used, but also the bits already read from the current byte. In fact, when we use for example the model 0x80 (the previous byte), the context of the historic data is composed not only of the previous byte, but also of the bits already read from the current octet. This obviously implies that for every bit read, there is a different context. Suppose we have the sequence 0x75, 0x86 (10000110b in binary), that the position of the encoded bits is just after the 0x75 value, and that we are using the previous byte + the bits currently read:

First, we start on a byte boundary:
- 0x75 with 0 bits read (we start with an empty current byte) is followed by bit 1 (the leading 1 of 0x86 = 10000110b). The context is 0x75 + 0 bits read.
- We read one more bit; we have a new context: 0x75 + bits "1". This context is followed by a 0.
- We read one more bit; we have a new context: 0x75 + bits "10". This context is followed by a 0.
...
- We read one more bit; we have a new context: 0x75 + bits "1000011", which is followed by a 0 (and we end on a byte boundary).

Reading 0x75 followed by 0x86, with a model using only the previous byte, we end up with 8 different contexts, each with its own n0/n1 pair to store in the probability table.
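
A minimal sketch of how such a context could be assembled for one model (the layout is my own illustration, not crinkler's actual hash input):

#include <cstdint>

// A context for one model: the selected previous bytes (masked) plus the
// bits already read from the current byte.
struct ModelContext {
    uint8_t maskedHistory[8]; // history[i] if the model uses it, else 0
    uint8_t currentBits;      // bits already read from the current byte
    int bitsRead;             // how many of them (0..7)
};

ModelContext BuildContext(uint8_t modelMask, const uint8_t history[8],
                          uint8_t currentBits, int bitsRead) {
    ModelContext ctx;
    for (int i = 0; i < 8; i++) {
        // bit (0x80 >> i) of the mask selects the -(i+1)-th byte,
        // so 0x80 = previous byte, 0x01 = -8th byte
        ctx.maskedHistory[i] = (modelMask & (0x80 >> i)) ? history[i] : 0;
    }
    ctx.currentBits = currentBits;
    ctx.bitsRead = bitsRead;
    return ctx;
}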

As you can see, it is obviously difficult to store all the contexts found (i.e., for each single bit decoded, there is a different context of historic bytes) together with their exact probability counters, without exploding the RAM. Even more so if you think about the number of models used by crinkler: 14 different selections of historic bytes for ergon's code alone!

This kind of problem is often handled with a hashtable that handles collisions. This is what is done in some of the PAQ compressors. Crinkler also uses a hashtable to store the counter probabilities, with the association context_history_of_bytes => (n0/n1), but it does not handle collisions, in order to keep the decompressor size minimal. As usual, the hash function used by crinkler is really tiny while still giving really good results.

So instead of storing the association context_history_of_bytes => n0/n1 directly, we use a hash function: hash(context_history_of_bytes) => n0/n1. The dictionary storing all those associations then needs to be dimensioned large enough to store as many as possible of the associations found while decoding/encoding the data.
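
A minimal sketch of such a collision-ignoring counter store (a naive illustration; the FNV-1a hash here is an arbitrary choice, not crinkler's tiny hash function):

#include <cstdint>
#include <cstring>

// One byte per counter, as in crinkler and PAQ: n0 and n1 together take 2 bytes.
struct Counter { uint8_t n0; uint8_t n1; };

struct CounterStore {
    Counter* table;
    uint64_t size; // number of slots

    explicit CounterStore(uint64_t slots) : table(new Counter[slots]), size(slots) {
        memset(table, 0, slots * sizeof(Counter));
    }

    // Collisions are simply ignored: two different contexts hashing to the
    // same slot will silently share (and disturb) the same counters.
    Counter& Lookup(const void* context, size_t contextLen) {
        const uint8_t* p = (const uint8_t*)context;
        uint64_t h = 14695981039346656037ull; // FNV-1a offset basis
        for (size_t i = 0; i < contextLen; i++) { h ^= p[i]; h *= 1099511628211ull; }
        return table[h % size];
    }
};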

Like the PAQ compressors, crinkler uses one byte per counter, meaning that n0 and n1 together take 16 bits, 2 bytes. So if you instruct crinkler to use a 100MB hashtable, it will be able to store 50 million different keys, i.e. different historic byte contexts and their respective probability counters. One remark about crinkler and the byte counters: in the PAQ compressors, the limits are handled, meaning that a counter that would go above 255 sticks at 255... but crinkler chose not to test the limits, in order to keep the code smaller (even though that test would take less than 6 bytes). What is the impact of this choice? Well, if you know crinkler, you are aware that it doesn't handle large sections of "zeros" (or any uniform initialized data) well. This is simply because the counters wrap around from 255 to 0, meaning that you jump from a ~100% probability (probably accurate) to an almost 0% probability (probably wrong) every 256 bytes. Does this really hurt the compression? It would hurt a lot if crinkler were used for larger executables, but in a 4k it doesn't hurt much (although it could, if you really have large portions of uniform initialized data). Also, not all contexts wrap at the same time (an 8-byte context will not wrap as often as a 1-byte context), so the final probability calculation remains reasonably accurate: while one model's counters have just wrapped, the other models, with their own probabilities, are still counting... so this is not a huge issue.

And what happens if the hashes of two different contexts collide? The models then update the wrong probability counters. If the hashtable is too small, the probability counters may be disturbed so much that they provide a less accurate final probability. But if the hashtable is large enough, collisions are less likely to happen.

Thus, it is quite common to use a hashtable as large as 256 to 512MB if you want, although 256MB is often enough; the larger your hashtable, the fewer the collisions, and the more accurate the probabilities. Recall the statement from the beginning of this post, and you should now understand why "crinkler can take several hundred megabytes to decompress"... simply because of this hashtable, which stores the next-bit probabilities for all the model combinations used.

If you are familiar with crinkler, you already know the option that searches for the best possible hash size, given an initial hashtable size and a number of tries (the hashtries option). This part is responsible for testing different hashtable sizes (for example, starting from 100MB and reducing the size by a couple of bytes, 30 times) and comparing the final compression results. It is a way to empirically reduce collision effects, by selecting the hash size that gives the best compression ratio (meaning the fewest harmful collisions). This option will only help you save a couple of bytes, though, no more.


Data reordering and type of data


Reordering or reorganizing the data to get better compression is a common technique in compression methods. Sometimes, for example, it's better to store deltas of values than the values themselves... etc.

Crinkler uses this principle to perform data reordering. At the linker level, crinkler has access to the portions of data and code, and is able to move those portions around in order to achieve a better compression ratio. This is really easy to understand: suppose that you have a series of initialized zero values in your data section. If those values are interleaved with non-zero values, the counter probabilities will keep switching from "there are plenty of zeros here" to "oops, there is some other data"... and the final probability will oscillate between 90% and 20%. Grouping similar data together is a way to improve the overall accuracy of the probabilities.

This part is the most time consuming, as crinkler needs to move all the portions of your executable around and to test which arrangement gives the best compression result. But it pays to use this option, as it alone may save you 100 bytes in the end.

Another thing related to data reordering is the way crinkler handles the binary code and the data of your executable separately. Why? Because their binary representations are different, leading to completely different sets of probabilities. If you look at the models selected for ergon, you will see that the code and data models are quite different. Crinkler exploits this to achieve better compression: it compresses the code and the data completely separately. The code has its own models and weights, the data another set of models and weights. What does this mean internally? Crinkler uses one set of models and weights to decode the code section of your executable. Once finished, it erases the probability counters stored in the hashtable-dictionary and moves on to the data section, with new models and weights. Resetting all the counters to 0 in the middle of decompression improves compression by 2-4%, which is quite impressive and valuable for a 4k (around 100 to 150 bytes).

I found that even with an adaptive model (with a neural network dynamically updating the weights), it is still worth resetting the probabilities between the code and data decompression. In fact, resetting the probabilities is an empirical way to tell the context modeler that the two kinds of data are so different that it is better to start from scratch with new probability counters. If you think about it, an improved demo compressor (for larger executables, for example under 64k) could cleverly detect the portions of data that are different enough that resetting the dictionary beats keeping it as it is.

There is one last thing about weight handling in crinkler. When decoding/encoding, crinkler seems to artificially boost the weights for the first bits discovered. This little trick improves the compression ratio by about 1 to 2%, which is not bad. Having higher weights at the beginning gives the compressor/decompressor a better response while it doesn't yet have enough data to compute correct probabilities. Boosting the weights helps the compression ratio at cold start.

Crinkler is also able to transform the x86 code of the executable part to improve the compression ratio. This technique is widely used and consists of replacing the relative offsets of jumps and calls (conditional jumps, function calls... etc.) with absolute addresses: the same target called from different places then always produces the same byte pattern, which compresses much better.
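
A minimal sketch of this transform on E8 (near call) instructions (a naive illustration: a real implementation has to avoid false positives where an 0xE8 byte is actually data, typically with heuristics or a disassembler):

#include <cstdint>
#include <cstring>

// Convert the rel32 operand of each E8 call into an absolute address, so
// that repeated calls to the same function compress to the same bytes.
// The inverse transform is applied after decompression.
void TransformCalls(uint8_t* code, int size, uint32_t imageBase) {
    for (int i = 0; i + 5 <= size; i++) {
        if (code[i] == 0xE8) { // near call with a 32-bit relative offset
            uint32_t rel;
            memcpy(&rel, code + i + 1, 4);
            // unsigned wrap-around handles negative (backward) offsets
            uint32_t absolute = imageBase + i + 5 + rel;
            memcpy(code + i + 1, &absolute, 4);
            i += 4; // skip the operand
        }
    }
}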

Custom DLL LoadLibrary and PE file optimization


In order to strip down the size of an executable, it is necessary to exploit the organization of a PE file as much as possible.

The first thing crinkler exploits is that lots of parts of a PE file are not used at all. If you want to know how much a Windows PE executable can be reduced, I suggest you read the Tiny PE article, which is a good way to understand what is actually used by a PE loader. Unlike the Tiny PE sample, where the author moves the PE header into the DOS header, crinkler chose to use this unused space to store the hash values that are used to reference the DLL functions.

This trick is called import by hashing and is quite common in intro compressors. What probably makes crinkler a little more advanced is that, to perform the "GetProcAddress" step (which is responsible for getting the pointer to a function from a function name), crinkler navigates inside internal Windows process structures in order to directly get the addresses of the functions already loaded in memory. Indeed, you won't find any import section table in a crinklerized executable: everything is re-discovered through internal Windows structures. Those structures are not officially documented, but you can find some valuable information around, most notably here.

If you look at the code stored in crinkler's import section (the code injected just before the intro starts, in order to load all the DLL functions), you will find cryptic calls like this:
//
(0) MOV EAX, FS:[BX+0x30]
(1) MOV EAX, [EAX+0xC]
(2) MOV EAX, [EAX+0xC]
(3) MOV EAX, [EAX]
(4) MOV EAX, [EAX]
(5) MOV EBP, [EAX+0x18]


This is done by going through internal structures:
  • (0) First, crinkler gets a pointer to the PROCESS ENVIRONMENT BLOCK (PEB) with the instruction MOV EAX, FS:[BX+0x30]. EAX now points to the PEB:
Public Type PEB 
InheritedAddressSpace As Byte
ReadImageFileExecOptions As Byte
BeingDebugged As Byte
Spare As Byte
Mutant As Long
SectionBaseAddress As Long
ProcessModuleInfo As Long ‘ // <---- PEB_LDR_DATA
ProcessParameters As Long ‘ // RTL_USER_PROCESS_PARAMETERS
SubSystemData As Long
ProcessHeap As Long
... struct continue

  • (1) Then it gets a pointer to the "ProcessModuleInfo/PEB_LDR_DATA" structure: MOV EAX, [EAX+0xC]
Public Type _PEB_LDR_DATA
Length As Integer
Initialized As Long
SsHandle As Long
InLoadOrderModuleList As LIST_ENTRY // <---- LIST_ENTRY InLoadOrderModuleList
InMemoryOrderModuleList As LIST_ENTRY
InInitOrderModuleList As LIST_ENTRY
EntryInProgress As Long
End Type

  • (2) Then it gets a pointer to the first "InLoadOrderModuleList/LIST_ENTRY": MOV EAX, [EAX+0xC].
Public Type LIST_ENTRY
Flink As LIST_ENTRY
Blink As LIST_ENTRY
End Type

  • (3) and (4) Then it navigates through the LIST_ENTRY linked list with MOV EAX, [EAX]. This is done 2 times: the first time, we get a pointer to the NTDLL.DLL module; the second time, a pointer to the KERNEL32.DLL module. Each LIST_ENTRY is in fact followed by the rest of the LDR_MODULE structure:

Public Type LDR_MODULE
InLoadOrderModuleList As LIST_ENTRY
InMemoryOrderModuleList As LIST_ENTRY
InInitOrderModuleList As LIST_ENTRY
BaseAddress As Long
EntryPoint As Long
SizeOfImage As Long
FullDllName As UNICODE_STRING
BaseDllName As UNICODE_STRING
Flags As Long
LoadCount As Integer
TlsIndex As Integer
HashTableEntry As LIST_ENTRY
TimeDateStamp As Long
LoadedImports As Long
EntryActivationContext As Long ‘ // ACTIVATION_CONTEXT
PatchInformation As Long
End Type

Then, from the BaseAddress of the kernel32.dll module, crinkler goes to the export section, where the function addresses are already loaded in memory. From there, the first function crinkler resolves by hash is LoadLibrary. After this, crinkler is able to load all the dependent DLLs and navigate through their export tables, recomputing the hash of every function name and trying to match it against the hashes stored in the PE header. When a match is found, the function's entry point is stored.

This way, crinkler is able to call OS functions from kernel32.dll without even linking explicitly to that DLL, as it is automatically loaded into every Windows process. Crinkler thus achieves a way to import all the functions used by an intro through a custom import loader.
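
A minimal sketch of the hash-matching idea (my own illustration: the hash function is a placeholder, not crinkler's actual one, and real code walks the PE export directory in memory instead of receiving name/address arrays):

#include <cstdint>

// Hypothetical name hash: any small function works, as long as the linker
// and the loader stub agree on it.
uint32_t HashName(const char* name) {
    uint32_t h = 0;
    while (*name) h = h * 31 + (uint8_t)*name++;
    return h;
}

// Resolve one imported function: scan the module's exported names and
// return the address whose name hash matches the stored hash.
void* ResolveByHash(uint32_t wantedHash,
                    const char* const exportNames[],
                    void* const exportAddresses[],
                    int exportCount) {
    for (int i = 0; i < exportCount; i++)
        if (HashName(exportNames[i]) == wantedHash)
            return exportAddresses[i];
    return 0; // not found
}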

Compression results


So finally, you may ask, how good is crinkler at compressing? How does it compare to other compression methods? What does the entropy of a crinklerized exe look like?

I'll take the example of the ergon exe. You can already find a detailed analysis of this particular exe.

Comparison with other compression methods


In order to make a fair comparison between crinkler and the other compressors, I used the data that is actually compressed by crinkler, after the reordering of code and data (this was done by unpacking a crinklerized ergon.exe and extracting only the compressed data). The comparison is accurate in that all the compressors work on exactly the same data.

Also, to be fair to crinkler, the size of 3652 bytes does not take into account the PE header + the crinkler decompressor code (which in total adds 432 bytes for crinkler).

To perform this comparison, I only used 7z, which has at least 3 interesting methods to test against:
  • Standard Deflate Zip
  • PPMd with a 256MB dictionary
  • LZMA with a 256MB dictionary
I also included a comparison with a more advanced packing method from Matt Mahoney's resources, Paq8l, one of the versions of the PAQ compressors, using neural networks and several context-modeling methods.

Program    Compression Method       Size in bytes    Ratio vs Crinkler
none       uncompressed             9796
crinkler   ctx-model 256MB          3652             +0.00%
7z         deflate 32KB             4526             +23.93%
7z         PPMd 256MB               4334             +18.67%
7z         LZMA 256MB               4380             +19.93%
Paq8l      dyn-ctx-model 256MB      3521             -3.59%

As you can see, crinkler is far more efficient than any of the "standard" compression methods (Zip, PPMd, LZMA). And I'm not even mentioning that a true comparison would have to include the decompressor size, so the ratios would certainly be even worse for all the standard methods!

Paq8l is of course slightly better... but if you take into account that the Paq8l decompressor is itself a 37KB exe... compared to the 220 bytes of crinkler... you should understand by now how efficient crinkler is in its own domain! (remember? 4k!)

Entropy


In order to measure the entropy of a crinklerized exe, I developed a very small program in C# that displays the entropy of an exe, from green (low entropy, fewer bits necessary to encode the information) to red (high entropy, more bits necessary to encode the information).
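
The core of such a tool is just a windowed entropy estimate. Here is a minimal sketch of the per-window computation (my own illustration in C++; the original tool is in C#, and the mapping of the result to a green-to-red gradient is left out):

#include <cmath>
#include <cstdint>

// Shannon entropy, in bits per byte, of a window of data: 0 for a constant
// run, up to 8 for uniformly random bytes.
double WindowEntropy(const uint8_t* data, int size) {
    int counts[256] = {0};
    for (int i = 0; i < size; i++) counts[data[i]]++;
    double entropy = 0;
    for (int v = 0; v < 256; v++) {
        if (counts[v] == 0) continue;
        double p = (double)counts[v] / size;
        entropy -= p * std::log2(p);
    }
    return entropy;
}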

I did this on 3 different ergon executables:
  • The uncompressed ergon.exe (28KB). It is the standard output of a binary exe built with MSVC++ 2008.
  • The raw crinklerized ergon.exe, with code and data sections extracted and reordered, but not compressed (9796 bytes)
  • The final crinklerized ergon.exe file (4070 bytes)
Ergon standard exe entropy
Ergon code and data crinklerized, uncompressed reordered data
Ergon executable crinklerized
As expected, the entropy is massive in a crinklerized exe. Compare it with the waste of information in a standard Windows executable. You can also appreciate how important the reordering and packing of the data (before any compression) performed by crinkler is.

Some notes about the x86 crinkler decompressor asm code


I have often said that the crinkler decompressor is truly a piece of x86 art. It is hard to describe all the techniques used here; there are lots of standard x86 size optimizations and some really nice tricks. Most notably:
  1. using all the registers
  2. using the stack intensively to save/restore all the registers with the pushad/popad x86 instructions. This is done (1 + number_of_models) times per bit: if you have 15 models, there will be a total of 16 pushad/popad pairs for a single bit to be decoded! You may wonder why so many pushes? It's the only way to efficiently use all the registers (rule #1) without having to store particular registers in a buffer. Of course, push/pop instructions are also used at several other places in the code.
  3. As a result of 1) and 2), apart from the hash dictionary, no intermediate structures are used to perform the context-modeling calculations.
  4. Deferred conditional jumps: usually, a conditional test in x86 is immediately followed by a conditional jump (like cmp eax, 0; jne go_for_bla). In crinkler, a conditional test is sometimes performed and its result used several instructions later (for example: cmp eax, 0; push eax; mov eax, 5; jne go_for_bla <---- this uses the result of the cmp eax, 0 comparison). It makes the code a LOT harder to read. Sometimes the flags are even used after a direct jump! This is probably the part of crinkler's decompressor that impressed me the most. It is of course quite common when programming heavily size-optimized x86 asm code... you need to know which instructions do not modify the CPU flags in order to achieve this kind of optimization!

Final words


I would like to apologize for the lack of charts and pictures to explain how things work. This article is probably still obscure for a casual reader and should be considered a draft. It was a quick and dirty post: I had wanted to write it for a long time, so here it is, not as polished as it should be, but it may be improved in future versions!

As you can see, crinkler is really worth looking at. The effort that went into making it so efficient is impressive, and there is little doubt that crinkler will see no real competitor for a long time, at least for 4k executables. Above 4k, I'm quite confident that there are still lots of areas that could be improved, and kkrunchy is probably far from being the ultimate packer under 64k... Still, if you want a packer, you need to code it, and that's not so trivial!
          The Right Way To Jailbreak Your iPhone 4, iPhone 3GS Or iPod Touch - Apple iOS ... - Sportz News        

SlashGear and other Apple watchers warned that Apple looks to be making life more difficult in general for those who want to jailbreak, or mess around with the insides of, their iPhones and other mobile Apple devices: "[T]he road ahead for iOS jailbreakers looks to be trickier than ever. iOS 4.3.4 reportedly also closed off [two] other loopholes hackers had been using for untethered solutions – an integer overflow-related bug and an incomplete code signing issue. Meanwhile, iOS 5 will reportedly block firmware downgrades 'using a new signing system.'"

Apple Closes Loophole

The message from the iPhone Dev Team is that most users should stay on iOS 4.3.3 and not update at all; there's no new functionality to be had, only the security fix (which can be achieved with a third-party app called PDF Patcher 2, available from the Cydia unofficial app store). Meanwhile this new jailbreak won't work on the iPad 2, and is being billed as primarily for the benefit of kernel hackers who want to work on the very latest firmware version.

iPhone Dev Team

While Apple's update may take untethered jailbreaking off the table, it doesn't necessarily mean cable-bound hacking will be the only option for iOS devices in the future. The iPhone Dev Team, as well as a long list of other coders, are a clever group and could come up with yet another untethered jailbreak option.

Right Way

The easiest way, as of writing, to make sure your iPhone or iPod Touch is jailbreakable with JailbreakMe v3 is to upgrade to the latest version of iOS available for your device. Just plug your device into iTunes, click on it under Devices in the sidebar, then click 'Check For Update' on the Summary tab. If your device isn't up to date, you'll be prompted to install the latest update. Do so… and prepare for jailbreak!



          Jailbreak And Unlock iPhone 4/3Gs 4.3.5 Using The New FastSn0w, Apple's iOS ... - Sportz News        

Apple has just released the new iOS 4.3.5, and customers with an iPhone, iPad or iPod Touch will be happy to know it fixes the security vulnerability with certificate validation. Jailbreakers will also be pleased that RedSn0w will jailbreak it, but if you like your jailbreaking, do not download iOS 4.3.5.

Lion Feature Support

Jeff is the Mac Observer's Managing Editor, and co-host of the Apple Context Machine podcast. He is the author of "The Designer's Guide to Mac OS X" from Peachpit Press, and writes for several design-related publications. Jeff has presented at events such as Macworld Expo, the RSA Conference, and the Mac Computer Expo. In all his spare time, he also co-hosts the We Have Communicators podcast, and makes guest appearances on several other podcasts, too. Jeff dreams in HD.

iOS 4.3.5 Security Update

This update is meant to patch a security vulnerability which could have left your phone open to an attack. If you're not using a jailbroken device, you should probably connect your iPhone, iPod touch or iPad to iTunes and install iOS 4.3.5 to tighten security. For Verizon iPhone users, the software update will bring iOS 4.2.10.

iPad 2 Jailbreak

While Apple can claim the recent update was to "protect" users, in reality it will only add additional safety for the most careless of users. After all, a program called "PDF Patcher 2" was widely available via the Cydia app store and other sources. PDF Patcher 2 does pretty much the exact same thing as iOS 4.3.4, but does so after the user has jailbroken.




          Portland Police Peppered my Papoose        
My baby was pepper-sprayed yesterday. I'm so proud. Well and truly. As part of what seems to be an organized movement against a movement, the police have shut down Occupy Portland (in addition to much unrest in many parks across the country), where my son has been contributing his time and energy to the same cause to which I dedicate myself here in Bellingham. Though I've never been tear-gassed or pepper-sprayed, I know that my very presence here makes that a possibility (though a slim one, as the Bellingham police have been quite gracious). Knowing that my son stood his ground against injustice the same way I would have done makes me a very proud mum, indeed.

I've lost track of the number of nights I've slept in a cold tent...I think last night makes night 20 (night 21 for the camp, but I missed a night when I went home to tend to my cat who now lives with me here). Today marks the beginning of Week Four here at Noisy Waters (our camp name, the native meaning for Whatcom, the county in which I live). I would never have believed I'd be living in a tent at my age, especially when I have a warm, dry, comfortable, cozy apartment of my own. I suppose it's taken me this long to finally know who I am, what I stand for and against, and what is so important that I will make sacrifices I'd never have considered making.

Why the hell am I doing this? For my granddaughter. So my wee Kalliepillar Flutterby won't grow up ingesting growth hormones in the milk that she drinks and genetically modified foods that haven't been tested. My granddaughter deserves the very best planet I can give her. Unfortunately, that planet has been overrun by corporations like Monsanto that buy their way into our government and into our food supply. When a former bigwig for Monsanto becomes a chief advisor for the Food and Drug Administration, there's a huge problem. Can you say conflict of interest?

Monsanto is just one very scary corporation whose government money threatens us. And not merely financially, like some of the others who take away our homes, underpay/underemploy us, or give our jobs to someone who will do them much more cheaply overseas. By remaining ignorant of and/or apathetic about their practices, we are allowing them to poison our food chain. We are inviting them to take away our small family farms as they sue farmer after farmer when they conveniently discover that a single kernel of their corn has "volunteered" on the next farm over. "You can't grow our seeds...those are our property. Now we will sue your farm away from you with our great corporate gain and a legal team you can't afford to beat." They are sneaky cheaters, throwing money at a government that looks away, choosing not to see that the chemicals they are producing may eventually eradicate entire plant species which are beneficial to man. Do some research into what Monsanto's weed killer, Roundup, is really doing to our world. Seems to me that a corporation that manufactures plant killers probably shouldn't be in the business of growing the genetically altered food the government tells us is "just fine" to eat.

And Monsanto's GMOs may do us all in, as testing isn't always done; "these foods are essentially the same as what you're used to eating, so extended testing isn't necessary". Really? You're fooling around with isolating a particular section of a gene from an animal that will, say, allow a plant to require less water and be heartier in adverse conditions, and you expect us to believe that it's basically the same thing our ancestors grew? Pardon me if I cry bullshit, but "Bullshit". What's scarier about all of this is that, when combined with the text of the Codex Alimentarius, written in 1962, you realize that the whole food supply becomes a potential weapon. And our very own government, there to protect us, right(?), has been so caught up in the want of more, more, more, that they look the other way while the ones pouring money into their coffers are simultaneously pouring poison down our throats.

This is just one example of what one corporation's money in our government is doing. And just look how deep it goes! If this one corporation alone is creating such problems for mankind while getting richer and richer as they do it, don't you wonder what the others are doing? Isn't it time to take a stand against that? Isn't it potentially the time to be pepper-sprayed if there's a chance that it will keep your granddaughter from growing breasts when she's nine years old and menstruating at the age of ten? Now is the time for action...we have been submissive and quiet for too long. I am living in a cold, wet, tent in the winter (it snowed some last night) to remind others that things are rotten in the state of America, and to rally with those who know that we have to change this system. Now. And hope it's not too late.

          ANDROID        
Android phones are becoming increasingly popular around the world and are serious competition for established handset vendors such as Nokia, Blackberry and iPhone.
But if you ask the average Indonesian "What is Android?", most people will not know, and those who do tend to be geeks who keep up with technology.
This is because most Indonesians only know 3 phone brands: Blackberry, Nokia, and "other brands" :)

There are several reasons why Android has not (yet) been accepted by the Indonesian market, among others:

  • Most Android phones use touchscreen input, which is not very popular in Indonesia,
  • Android needs a very fast internet connection to be used to its full potential, while the internet service from Indonesian mobile operators is not very reliable,
  • And finally, the perception that Android is harder to operate/use than other phones such as Nokia or Blackberry.

What is Android

Android is an operating system used on smartphones and tablet PCs. Its role is the same as that of the Symbian operating system on Nokia, iOS on Apple devices and BlackBerry OS.
Android is not tied to a single phone brand; well-known vendors already using Android include Samsung, Sony Ericsson, HTC, Nexus, Motorola, and others.
Android was initially developed by a company called Android Inc., which was acquired in 2005 by the internet giant Google. Android is built on a modified Linux kernel, and each release is codenamed after a food dish.
Android's main advantages are that it is free and open source, which makes Android smartphones cheaper than a Blackberry or an iPhone, even though the (hardware) features Android offers are better.
Some of Android's main features are WiFi hotspot, multi-touch, multitasking, GPS, accelerometers, Java support, support for many network types (GSM/EDGE, IDEN, CDMA, EV-DO, UMTS, Bluetooth, Wi-Fi, LTE & WiMAX), as well as the usual basic phone capabilities.

Android versions currently in circulation

Eclair (2.0 / 2.1)

The first Android version adopted by many smartphones. Eclair's main features were a complete overhaul of the user interface structure and appearance, and it was the first Android version to support the HTML5 format.

Froyo / Frozen Yogurt (2.2)

Android 2.2 was released with 20 new features, including speed improvements, Wi-Fi hotspot tethering and support for Adobe Flash.

Gingerbread (2.3)

The main changes in version 2.3 include a UI update, improvements to the soft keyboard & copy/paste, power management, and Near Field Communication support.

Honeycomb (3.0, 3.1 and 3.2)

An Android version aimed at gadgets/devices with large screens, such as tablet PCs. Honeycomb's new features include support for multicore processors and hardware-accelerated graphics.
The first tablet running Honeycomb was the Motorola Xoom, released in February 2011.
Google decided to temporarily close access to the Honeycomb source code, to prevent phone manufacturers from installing Honeycomb on smartphones.
This was because, with earlier Android versions, many companies had shoehorned Android into tablet PCs, resulting in a poor user experience and giving Android a bad image.

Ice Cream Sandwich (4.0)

Android 4.0 Ice Cream Sandwich was announced on May 10, 2011 at the Google I/O Developer Conference (San Francisco) and officially released on October 19, 2011 in Hong Kong. "Android Ice Cream Sandwich" can be used on both smartphones and tablets. The main features added in Android 4.0 are Face Unlock, Android Beam, a major User Interface overhaul, and a standard (native) screen resolution of 720p (high definition).

Android Market Share

In 2012, around 630 million smartphones will be sold worldwide, of which an estimated 49.2% will run the Android OS.
Google's own figures currently show that 500,000 Android phones are activated every day around the world, a number that keeps growing by 4.4% per week.
Platform                       API Level    Distribution
Android 3.x (Honeycomb)        11           0.9%
Android 2.3.x (Gingerbread)    9-10         18.6%
Android 2.2 (Froyo)            8            59.4%
Android 2.1 (Eclair)           5-7          17.5%
Android 1.6 (Donut)            4            2.2%
Android 1.5 (Cupcake)          3            1.4%
Distribution of Android versions worldwide as of June 2011

Android Applications

Android has a large developer base for application development, which makes Android's functionality broader and more diverse. The Android Market, managed by Google, is the place to download Android applications, whether free or paid.
Although not recommended, Android's performance and features can be enhanced further by rooting the device. Features such as wireless tethering, wired tethering, uninstalling crapware, overclocking the processor, and installing custom flash ROMs become available on a rooted Android.

Related Articles

  • Important events in the world of technology in 2011 (Kaleidoscope)
  • Chrome for Android beta released on the Android Market
  • Chinese phone electrocutes a young man in India
  • The best free Android games | Download links
  • Tips if your phone gets wet
          Product Review: Delicious Nutritious Ready Meals        

We’re busy and time-poor. But we still want to eat something healthy – and tasty – before we fly out the door. Despite all good intentions, at times healthy meals are NOT quick and take time to create. Time which you don’t have.

Cue – this new range of five chilled meals from Woolworths via Michelle Bridges. Are they any good? Do they really have two or more serves of vegetables and lots of protein? Are they a well-balanced meal?


This post has been sponsored by Woolworths

 Woolworths and Michelle Bridges have teamed up to create Delicious Nutritious Ready Meals, a range of chilled meals that taste great, are nutritious and can be heated in under five minutes.

The range consists of five single-serve 350g complete meals, each with a combination of protein, whole grains and vegetables. Each supplies less than 1890 kilojoules (450 Calories), but they're not a diet meal, which I like. These five meals are available from the Deli aisle of all Woolworths supermarkets from the end of March, in addition to the frozen meals already on offer. These latest five are:

  1. Penne Beef Bolognese with Roasted Vegetables 350g
  2. Butter Chicken with Whole Grain Rice 350g
  3. Peri-Peri Style Chicken with Whole Grain Rice 350g
  4. Beef and Barley Casserole with Roasted Vegetables 350g
  5. Italian Style Chicken with Whole Grain Penne Pasta 350g

Taste: 8 out of 10

I got the enviable task of taste-testing all five and so can vouch first-hand for their top texture and nice flavour.

I have two favourites in the range - the Butter Chicken with Whole Grain Rice, and the Peri-Peri Style Chicken also with Whole Grain Rice. Here’s what I liked about the Butter Chicken meal (shown below):

MBDN Butter Chicken Closeup

  • It’s great to see vegetables mixed in with the rice. Nice. Not huge quantities but still there. I spy small chunks of carrot, corn kernels and red lentils too. Plus pumpkin served with the Chicken.
  • I liked the whole grain rice which you would not realise is brown rice - which is not wildly popular. It is yellowish in colour thanks to the turmeric sauce which ‘disguises’ it. The brown-rice-haters won’t even notice.
  • There’s ginger, garlic and chilli which add a pleasant kick of heat.
  • Best of all, it actually LOOKS like the photo on the front of the pack. There’s no bluffing here. They def want you to re-purchase.
  • Excellent texture and flavour – unlike many of the ready-meals I’ve tried in the past.

All the chilled meals have a shelf life of 30 days (kept refrigerated), which is long. Yes, you can freeze them after purchase, which is what I did. There are different instructions for microwaving them from frozen, and they turn out surprisingly well.


Above: Peri-Peri Style Chicken, frozen 


Above: Peri-Peri Style Chicken, heated

 Nutrition: 18 out of 20

Here I warm to the notion of "balance". Yes, there are starchy carbs in each (rice, barley, pasta, legumes, veg), but not in huge amounts, and they come with the protein (chicken, beef) plus half a plateful of vegetables, which matches the Plate Model of what to serve: one half vegetables or salad, one quarter protein and one quarter carbs. A tick from me.

My Butter Chicken meal contains two serves of vegetables (or 162g in weight). This is pretty good and hard to achieve (believe me, I know). Note: a serve is around 75g in weight. Which means you can count these two towards the recommended total of 5 serves a day.

There are no preservatives, artificial colours or flavours.

This Butter Chicken has only 1860kJ (444 Calories). This makes for a decent intake which, along with their fibre and protein, says you won’t go hungry. For instance, if you’re on a 7500kJ or 1500 Cal diet, this meal makes up 30 per cent of your day’s intake, perfect for dinner.

All meals have a Health Star Rating of 4 out of 5. This shows they have little saturated fat, little salt or sugar and pack in more vegetables and legumes than products with 3 or lower.

Example - Butter Chicken – Nutrition Panel

Serve size: 350g

Quantity             Per serve    Per 100g
Energy, kJ (Cals)    1860 (444)   530 (127)
Protein, g           24.2         6.9
Fat, g               18.9         5.4
Fat, saturated, g    5.6          1.6
Carb, g              41.6         11.9
Sugars, g            12.2         3.5
Fibre, g             4.9          1.4
Sodium, mg           508          145

 

 Example - Butter Chicken - List of ingredients

This is what you’ll see in descending order on the label:

 

Whole Grain Rice (41%) (Cooked Brown Rice (60%), Carrot (20%), Corn (17%), Rice Bran Oil, Mustard Seed, Cumin Seed, Ground Turmeric (0.3%), Parsley), Butter Chicken Sauce (26%) (Diced Tomato (22%) (Tomato, Tomato Juice, Acidity Regulator (Citric Acid), Mineral Salt (Calcium Chloride)), Water, Tomato Paste (13%), Onion, Cream (7%), Yoghurt, Ginger Puree, Garlic Puree (4%), Carrot, Honey, Red Lentils (3%), Soy Sauce, Red Cayenne Chilli Puree, Herbs & Spices, Canola Oil, Fish Sauce, Corn Starch, Yeast Extract), Tandoori Chicken (17%) (Chicken (90%), Water, Tandoori Paste, Tapioca Starch), Pumpkin (16%).

 

Key nutrients

Protein – each meal delivers anywhere from 16 to 25 grams of protein. If you want to build muscle after a workout, you should aim for 20 to 30 grams of protein. These meals fit nicely as a post-workout meal.

Vegetables – at least 2 serves of vegetables in each meal. Great.

Fibre – you’ll get lots of fibre from whole grains, legumes and vegetables.

Starchy carbs – ranging from brown rice, pearl barley and wholegrain penne, as well as legumes such as chick peas or red lentils plus vegetables such as corn, pumpkin, carrots as well as onion, red capsicum, celery, and onion.

MBDN Beef Casserole hand

 

Who would this product be suitable for?

Busy people who want a quick but healthy dinner and don't have much time. I have already tested the MBDN frozen dinners (which I must admit I like, and which are a heck of a lot better than any other frozen dinner from a supermarket), but I understand some people won't ever venture down the frozen aisle or buy a frozen dinner. So being in the chilled deli section near the front will attract a whole different shopper.

Convenience: 8 out of 10

Very easy. Everything is contained in the meal pack. The only thing to check is to make sure you follow the directions exactly for heating!

For instance, for my Butter Chicken meal, it’s simple to heat the two-pack meal. You simply pierce the film on both and microwave for 5 minutes. Stir well to ensure all ingredients are mixed. That’s it! On all five, the instructions for heating were clear and easy.

At $7.99 each, the retail price is very reasonable for a complete meal (often you can find them on sale for less). I buy a BBQ chicken for around $15 but still have to prepare vegetables and bread or potatoes to have with that.

 MBDN Butter Chicken Label Rear

 

Sustainability 8 out of 10

The outer sleeve is cardboard which can be readily recycled.
The black inner polypropylene tray can also be thrown into your recyclables bin. It is marked "PP 5" on the recycling scale.
Only the thin clear plastic film on top would be waste. But it’s only tiny.
Woolworths say they aim to source Australian ingredients where they can. All the meals are made here. All the beef and chicken are Australian.

Overall score 42 out of 50, or 4 Apples

 

 

 

More details at Michelle Bridges' website OR Woolworths website.

 

The bottom line

I don't want to hear you say again "Eating healthy takes too much time". Now there IS no excuse. I'd def buy these again and opt for one of my two faves, either the Butter Chicken or the Peri-Peri Style Chicken. I'd pop a Delicious Nutritious meal into my fridge or freezer for those days when my schedule changes or I have to go out in a hurry. Dare I say it? These ARE delicious! They make a handy meal-for-one when you don't want to cook. Nor have the time.


Author

Catherine Saxelby

          How to get WiFi to work after installing Ubuntu or Lubuntu on Macbook?        
Written by Pranshu Bajpai |  | LinkedIn

Problem: No WiFi connectivity in Lubuntu after installing it on a Macbook Air.


I recently installed Lubuntu to breathe life into my old Macbook Air 1,1 (2008). The installation went smoothly and the operating system has given me no problems so far. The only thing that does not work right off the bat is WiFi -- I have no WiFi drivers, and no icon. The icon is not the real problem, though; getting the right drivers is.

After sifting through a lot of content on the Internet, I was able to get it working on my Mac Air 2008 and another Mac Air late 2010 3,2 model. Both of these have slightly different WiFi cards -- although both are Broadcom -- and so require slightly different procedures. But these steps should work for most people out there.

How to enable WiFi in Lubuntu on a Macbook?


Ubuntu, or Lubuntu, seems to be missing drivers for the Broadcom network hardware installed on a Macbook -- which leads to the problem of no WiFi. You need to get the drivers appropriate for your device.

With Internet connection


WiFi is obviously not working on this device yet, but if you have any other means of obtaining connectivity on this Macbook, that simplifies things a lot. Just type the following commands:

#sudo apt-get update
#sudo apt-get purge bcmwl-kernel-source
#sudo apt-get install firmware-b43-installer

The 'purge' part gets rid of 'bcmwl-kernel-source' in case you have been trying versions of that driver. It may or may not work for some systems. I tested on 2 different Macbook Airs (2008 and 2010) and both reacted differently to it. I found 'firmware-b43-installer' to be more reliable.

Since you have connectivity, the apt-get command will simply load the best-suited version of the driver on your machine, and after a reboot, you should be able to get WiFi working. I wasn't so lucky though...

Without Internet connection


Find out exactly what WiFi hardware you have on your Macbook by using the following command:

#lspci -nn | grep Network

That will tell you the details you need to know. For instance, in my case, I received the following output:

01:00.0 Network controller [0280]: Broadcom Corporation BCM43224 802.11a/b/g/n [14e4:4353] (rev 01)

Here, 'BCM43224' is the important part. Look around for the best-suited version of the b43 firmware for your card.

Now, you can go ahead and obtain b43_updated, unzip it, and copy its contents into /lib/firmware/:

#sudo cp -r b43/ /lib/firmware
#sudo modprobe -rv b43
#sudo modprobe -v b43

Your /lib/firmware/ folder should now hold the necessary files:



Now reboot, and you should have the WiFi working.
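If you want to confirm that the right module actually loaded after the reboot, two standard checks (nothing Macbook-specific about them) are:

#lsmod | grep b43
#dmesg | grep b43

The first should list the b43 module, and the second should show the firmware being picked up.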

WiFi network connectivity icon missing from panel

Do you still not see a difference? Maybe you're looking for the WiFi connection icon on the taskbar panel and it's just not there. In that case, 'nm-applet' is missing from your environment. You can fix this in the following manner:

Preferences --> Default applications for Lxsessions --> Autostart --> Manual Autostart -> type: nm-applet --> click: 'Add'
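If you prefer doing this from a terminal, a rough equivalent is to create the XDG autostart entry yourself -- a minimal sketch, assuming your session honours ~/.config/autostart (LXDE/Lubuntu's lxsession does):

#mkdir -p ~/.config/autostart
#cat > ~/.config/autostart/nm-applet.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=Network Manager Applet
Exec=nm-applet
EOF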

Logout and log back in. The WiFi applet should be there now.
          /var/log Disk Space Issues | Ubuntu, Kali, Debian Linux | /var/log Fills Up Fast        
Written by Pranshu Bajpai |  | LinkedIn

Recently, I started noticing that my computer keeps running out of space for no reason at all. I mean I didn't download any large files and my root drive should not be having any space issues, and yet my computer kept telling me that I had '0' bytes available or free on my root drive. As I found it hard to believe, I invoked the 'df' command (for disk space usage):
#df

So clearly, 100% of the disk partition is in use, and '0' is available to me. Again, I tried to see if the system simply ran out of 'inodes' to assign to new files; this could happen if there are a lot of small files of '0' bytes or so on your machine.
#df -i

Only 11% of inodes were in use, so this was clearly not a problem of running out of inodes. This was completely baffling. The first thing to do was to locate the cause of the problem. Computers never lie. If the machine tells me that I am running out of space on the root drive then there must be some files that I do not know about; most likely these are some 'system' files created during routine operations.

To locate the cause of the problem, I executed the following command to find all files of size greater than ~2GB:
# find / -size +2000M

Clearly, the folder '/var/log' needs my attention. Seems like some kernel log files are humongous in size and have not been 'rotated' (explained later). So, I listed the contents of this directory arranged in order of decreasing size:
#ls -s -S

That one log file 'messages.1' was 12 GB in size and the next two were 5.5 GB. So this is what has been eating up my space. First thing I did, was run 'logrotate':
#/etc/cron.daily/logrotate 
It ran for a while as it rotated the logs. logrotate is meant to automate the task of administrating log files on systems that generate a heavy amount of logs. It is responsible for compressing, rotating, and delivering log files. Read more about it here.
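For reference, logrotate is driven by small stanzas in /etc/logrotate.conf and /etc/logrotate.d/. A minimal sketch of such a stanza (the actual defaults for messages and kern.log vary by distribution):

/var/log/messages {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}

This keeps four compressed weekly rotations of /var/log/messages, and quietly skips the file if it is missing or empty.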

What I hoped by running logrotate was that it would rotate and compress the old log files so I can quickly remove those from my system. Why didn't I just delete that '/var/log' directory directly? Because that would break things. '/var/log' is needed by the system and the system expects to see it. Deleting it is a bad idea. So, I needed to ensure that I don't delete anything of significance.

After a while, logrotate completed execution and I was able to see some '.gz' compressed files in this directory. I quickly removed (or deleted) these.

Still, there were two files of around 5 GB: messages.1 and kern.log.1.  Since these had already been rotated, I figured it would be safe to remove these as well. But instead of doing an 'rm' to remove them, I decided to just empty them (in case they were being used somewhere).
#> messages.1
#> kern.log.1

The size of both of these was reduced to '0' bytes. Great! Freed up a lot of disk space this way and nothing 'broken' in the process.
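(If coreutils' truncate is installed, #truncate -s 0 messages.1 does the same emptying without relying on shell redirection.)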

How did the log files become so large over such a small time period?


This is killing me. Normally, log files should not reach sizes like this if logrotate is doing its job properly and everything is running right. I am still interested in knowing how the log files got so huge in the first place. It is probably some service, application or process generating a lot of errors. Maybe logrotate is not able to execute under 'cron' jobs? I don't know. Before 'emptying' these log files I did take a look inside them to find repetitive patterns. But then I quickly gave up on reading 5 GB files as I was short on time.

Since this is my personal laptop that I shut down at night, as opposed to a server that is up all the time, I have installed 'anacron' and will set 'logrotate' to run under 'anacron' instead of cron. I did this since I have my suspicions that cron is not executing logrotate daily. We will see what the results are.

I will update this post when I have discovered the root cause of this problem.
          So Long and Thanks for All The Fish... and Crabs... and Bacon...        
This is it, the final post!  The last one of the whole damn blog.  It's a weird milestone, but it wasn't a sudden decision. Actually, I decided this somewhere during my madcap overview of state foods, somewhere between Georgia and Illinois (I can’t remember exactly), and well after I realized that I just no longer had the money, time or gas to go back around the Beltway again.  But still, after 6 ½ years (exactly - the first two posts were on September 12, 2006), 1,780 posts, 2,977 comments (as of this posting), about 310 recipes attempted (most of which were somebody else's recipes I was interpreting) and about 810 eateries, festivals, markets and food trucks visited, it’s time to pack it in.  The “blog fatigue” has taken a strong hold, and just like Ray Lewis (RAVENS W00000000T!!!!!!), Tina Fey (30 ROCK W00000000T!!!!!!) and Benedict XVI (zuh? Er, RAVENS W00000000T!!!!!!), I want to go out on a high.

No, not “I want to go out high”.  I want to go out ON A high.  Good grief. 


I’ve learned a lot these past several years of being part of the Baltimore food blogging community. I’ve tried to winnow that down to a list, with items in no particular order.  Some are more particular than others.  To wit:

1. There are a lot of good crab cakes in this city. And a lot of bad ones.  But the bad ones are usually still better than the ones you find elsewhere.

2. You just can’t buy a stand mixer cheap, even from your favorite thrift store.  You just can’t.  Don’t do it.

3. You can really smoke pork barbecue in the slow cooker.  And in the oven.  Beef brisket, too.

4. Eating something you’ve grown is pretty damn satisfying, even if all you got from several broccoli seeds was one head the size of your fist.

5. Restaurants can actually improve, though how much so is debatable.

6. They can also get worse.

7. You don’t have to spend a fortune to get excellent food or service.

8. But you CAN end up spending a fortune and get craptastic service instead.

9. Chinese food in the US is a bit different than it is in the UK or in the Netherlands. Or, especially, in China.

10. You can actually pop sorghum at home.  Amaranth, too.  And while a dome popper (that rotates the kernels) might be preferable, you can get away with using just a stainless steel stockpot.

11. I now know how to make poi.

12. And sushi.

13. And beer. 

14. And New York, New Haven and Chicago style pizzas.  I just need to make sure they stay flat. 

15. There are some good eats from food trucks.  Here and in DC.  And LA.

16. Recipes are there for a reason.  Use them.

17. And read through them first!

18. That said, so long as you know where to improvise (and what to search for on the internet), you don’t have to follow the recipe to the letter.

19. As much as the woman irritates the hell out of me, I have to admit that Sandra Lee’s heart is in the right place in trying to help home chefs without a lot of scratch make something edible.  She doesn’t always succeed (ahem), but at least she tries.  To paraphrase Sophia Petrillo, her heart’s in the right place but I don’t know where her brain is.

20. Still, what’s up with those goddamn tablescapes!?

21. Guy Fieri, on the other hand... I have no friggin’ clue why he’s still on TV.

22. Hooray for the people who thought up Restaurant Week.  And brewpubs.  And Dogfish 90 Minute IPA.

23. I now know that people in South Dakota deep fry raw beef and eat it on toothpicks.  That’s about as All American as you can get.

24. I have grown an appreciation for wine, but I will always be a beer person at heart.  Double IPA please, only one if I have to drive somewhere, and only water until I can drive.

25.  Oh yes, don’t drink and drive.

26. Homemade tomato sauce has totally ruined the stuff in a jar for me forever.  No high fructose corn syrup! (Seriously, look at the ingredients the next time you buy store bought.)

27. Locally sourced really does taste better than the stuff they ship 2,000 miles across three time zones just so we can have cauliflower out of season.

28. Sometimes all you want is a nice, juicy hot dog. Without bacon.  That’s right, I said “without”.

29. Yes, I love bacon, don’t get me wrong.  But everything in moderation.  If you have bacon all the time, it’s not special.  (Didn’t Margaret Cho say that once?)

30. That said, this blog has re-introduced me to the pleasures of cooking with bacon grease.  In moderation.

31. That Bitchin’ Kitchen show is pretty damn strange, and it rocks.

32. Nigella Lawson has such a great way of phrasing things on her shows and in her cookbooks.  It’s such fun to read her.


33. If you have the time to explore food in local places you never get to visit, take it.  Otherwise someone raised in Lansdowne won’t find the excellent hot dogs in Dundalk, fried oysters in Edgewood or Chinese and Japanese food in Overlea that he should be discovering (or the pit beef in Lansdowne and Arbutus that folks in Dundalk, Edgewood and Overlea are missing, too - and yes there is also good pit beef in Dundalk and off Route 40).

34. The internet is a great repository for recipes, but there will always be a place for cookbooks.

35. I wish there were more people out there like Jolene Sugarbaker, the Trailer Park Queen.  Someone at LOGO get her a cooking show, dammit.

36. Seriously, what is up with this ridiculous "throw it up overnight Frozen Yogurt shop" craze? It has to end sometime. So long as the people working all of them find other work. Don't want anybody out of a job.


37. It's El-li-KIT City, not El-li-COT City! Jeez Louise, people.

38. A few things I regret not having blogged about these past few years:
  • Jamaican food, particularly jerk chicken
  • The Charleston, though that one is because I could not afford it. Still can't afford it. Likely never will. Wah waah.
  • How to steam crabs. Sure we all know how to do that here, but I never actually got around to writing an actual how-to post.
  • The Museum Restaurant, now snuggled in the former space where the Brass Elephant used to be.
  • More family recipes, and maybe an exploration of the hallowed Woman's Day Encyclopedia of Cookery.
  • More posts about food, food production and nutrition writing. I've read and/or listened to on audiobook more than a few lately that really deserve more of a mention on a site like this one:
    • Animal, Vegetable, Miracle by Barbara Kingsolver and family
    • The Bucolic Plague: How Two Manhattanites Became Gentleman Farmers by Josh Kilmer-Purcell
    • The American Way of Eating by Tracie MacMillan
    • In Defense of Food by Michael Pollan (also been meaning to check out Eric Schlosser's Fast Food Nation. That one's next on the bucket list).
39. This city needs more Ethiopian restaurants. And Nigerian ones, too. And barbecue joints.

40. If you want an array of free cookbooks, go to the Book Thing in Waverly. Not just cookbooks but any books: you can take your old books that you don't want anymore and take home with you whatever you want. Granted, the selection skews towards the older stuff (hello, cookbooks for 600 watt microwaves from the early 80's), but it's still a fascinating bevy of cookbooks.

41. You want the essence of "Smalltimore"? When you are out with your friends getting pizza at Iggie's and you see a member of the Ace of Cakes show outside the window, then you mention it on the blog, and then she comments afterwards! Please keep rockin' this town, Mary Alice, and all y'all at Charm City Cakes. That's Smalltimore.

And that’s it.  I can’t really sum up 78 months worth of posts in one post much less one paragraph, so I’m not even going to try. But I will say that I have met a lot of interesting and talented people in this Baltimore food blog community, and made friends and shared experiences I am lucky to have.  Keep reading their food blogs, because they have forged into directions I had only thought about once in a long while, and many have been able to profit off of the experience (some of them have actual books you can buy now).  I am horrified at the thought of forgetting somebody and not going back to correct it, with this being my final post and all.  So instead I thank all of you in the Baltimore food blog community as a whole. Y’all are awesome (yes, I meant "awesome" :D ), and you make me hungry.

I finish the blog very fortunate to have even had the money to do this. There are so many people in this country and in this world who just go hungry, who don't have access to anything healthy and have to worry about whether or not to buy food for themselves and their families, and here I am blogging about what I ate last week downtown. Reflecting on that kind of puts some things in perspective for me.

I’ve also learned (in large part on my own) a lot that I did not know, and probably would not have made the time to know were it not for this blog.  Eats around the Beltway that the food reviewers don’t often look at when they’re focusing on the finer and kitschier dining options in the city.  Specific foods in specific parts of the country that I’d never even known existed (from three different kinds of Native American frybread, to what a New York chocolate egg cream actually is, to how to make an honest-to-goodness sabayon for your Seattle sea scallops, to how long it actually takes to boil crawfish Louisiana-style).  Ditto for the world (from Papua New Guinea to Tanzania to leading 2010 World Cup contenders.  I’m looking at you, Uruguay).  The variety of festivals in the Baltimore area that are a cheap way to explore the area’s cultural diversity (and food), the original motivation for this blog in the first place back when it started as the Charm City Snacker.  The silliness of live-blogging a cooking competition show in real time, MST3K-style.  

And of course, this:




Ah yes, Aunt Sandy's infamous Kwanzaa cake video. You didn’t think I’d end the blog without slipping that in one more time, did you?

I am incredibly lucky to have undergone this experiment, and I thank everyone who has been a fan these past 6 ½ years, and the hardworking people who make and serve the food I’ve talked about.

So what does the future hold for me?  Danged if I know.  Work, family, hopefully some romance here and there, definitely some food.  I will say this: I am heading to New Orleans for a conference in May and getting some delicious food there, and hopefully in my down time seeing the Southern Food & Beverage Museum (particularly their Maryland exhibit - they do indeed have one).  Plus I’ll be eating locally and growing locally more often than I have in the past.  Most exciting, however, is a trip to Dublin for my birthday (the one in Ireland).  I normally would not do this or even bother to scrounge up the money, but it’s one of those "big" birthdays and I wanted to do something special.  Again, I’m incredibly thankful and lucky that I even get to do this.

Apart from all that, I will just continue cooking food, growing food, investigating recipes from my own backyard and from around the world, but without telling cyberspace about it (alright, I might mention a few of these things on Twitter, but not on here).  Before I decided to finish the blog, I got a hold of my Great Great Aunt Florence’s old recipe book.  I had thought of working through each recipe and seeing how it turned out (there are two crab cake recipes in there, plus one for a Dream Whip Cake).  Maybe I should write a blog about it?

Nah, done that already ;)

 

And so, this is John, signing off for the last time.  Don't worry - I'm not taking the blog down. It's staying up for the foreseeable future, and probably longer than that. I will check and moderate the comments for a while and maybe add a few jump breaks to some of the longer posts (now that they've bothered to add that capability when I need it the least). Oh, and I should direct you to the newly-indexed State-by-State page to the right.  But I’m not posting anymore.  Seriously, I’m done.  I am pooped. I will miss this blog, but I’m really looking forward to missing it. And finishing it.  Really.

Now what better way to finish than with one of those crazy food haiku?

Time to close up shop.
Bawlmer Snacker is complete.
Now, what’s for dinner? :)

(No the smiley face doesn’t count as an extra syllable!  It’s still a haiku.  Sheesh.)

          Kitchen Experiments: Popping Sorghum (and Amaranth) Part II        
Now that I have a bit more time to do stuff, what with recent blogging projects done (again: PHEW), I thought I would give the sorghum popping experiment from a few years ago one last revisit.  As you (and the various commenters who have visited) may remember, this experiment did not go too well for me: popping it in a still or shaken pot yielded few kernels, and using the hot air popper just caused a big mess of, again, mostly unpopped, slightly toasted sorghum kernels.  I say "slightly" because most were blown out of the hot air popper before I knew what hit me.  (Scratch that: the sorghum hit me.  Literally.)

Based on research I've done lately, including from links provided by several of the commenters in the first post, I've come to a few conclusions about what went wrong:
  • Some folks had suggested adding moisture to the seeds.  Perhaps the seeds I used were kind of low quality and a bit desiccated already.
  • Maybe use a dome popper.  One gentleman from Texas said he and his have been popping it for a few decades, and he uses this method.
  • Another commenter from Georgia notes that of all the things she tried, putting the sorghum in a deep pot with the lid on got her the best results, specifically if you turn the heat way down in the last minute.
  • Growing your own sorghum might work out well for you.  Check out the many mail-order non-GM seed companies (one list is here, or else just do a Google search).
Thanks to Andrew Zimmern, sorghum popping has become just enough of "a thing" that some companies have begun specifically selling it and posting helpful videos on Youtube.  Just Poppin was a site whose folks posted once or twice, and had some videos that were useful.  Two in particular stood out for me.  In the first one, they use two teaspoons of olive oil in a pot and (I never caught the exact measurement but it looked like) 1/4 cup of sorghum.



For the second one, they show how to dry-pop it.  And as I discovered too late for my other experiments, one key here is to use a vessel that is not dark on the insides.  Yep, as great as cast iron is, this is one time you need to put it away, unless it's one of those enameled ones that is beige or something on the inside.  Mine is not.



With those ideas in mind, I set out to give a proper finish to my sorghum popping experiment.  The goal: to get as much as possible, and to note which conditions led to that.

The sorghum I used in this experiment was a brand new bag of Shiloh Farms Sorghum Grain.  This stuff is not as easy to find as I remembered - even many of the natural food markets were out of stock of this stuff (though they do normally carry it), but I did find it eventually at the Natural Market in Timonium, where I figured their big shelf of whole grains had it nestled in there somewhere.  In fact, they had a few bags of it.


Oh, and this time I took photos.

I had started with a bag that was a few years old, with pretty lackluster results, prompting my search for fresher stuff.  Maybe one or two kernels popped out of an entire 1/4 cup.  There is my first thing I learned: use fresh ones.

I set up a few experiments on my stovetop.  I gathered the following things for this round of experiments:
  • bag fresh sorghum (here: Shiloh Farms brand)
  • olive oil
  • 1/4 measuring cup and teaspoon
  • long wooden spoon
  • cast iron crock pot and deep sided stainless steel pot (this latter one yielded the best results)
Though several people have had success with the dome poppers, I opted not to buy one.  My reason: knowing my luck, it will work for everyone but me, so I will just save the $30 to $40 and not buy a new one after all.  However, if you do decide to try a dome popper, make sure it is one that circulates the sorghum.  The ones that blow from the bottom, from what I have read elsewhere on the internet, don't yield the best results.  Also note: the hot air poppers typically blow from the bottom.

Experiment 4a: Popping 1/4 cup sorghum in a crock pot with oil while stirring


For this, I waited until the oil was starting to shimmer.  I had the heat up to middle intensity...


...and dumped in a quarter cup of sorghum.  It may not have been as "shimmering" as I needed, because it didn't start popping for at least 20 seconds.


I might have also used more oil than I needed.  I wonder if maybe I almost "deep-fried" the sorghum, in a sense?



At any rate, I wound up with very few popped kernels of sorghum.

Experiment 4b: Popping 1/4 cup sorghum in a crock pot with no oil while stirring


For the next quarter cup of sorghum, the only difference was a lack of oil.  The results are on the right: substantially more sorghum kernels popped than with the oil.  With that, I decided I would likely have the most luck by leaving out the oil and just dry-popping the sorghum.

Experiment 5a: Popping 1/4 cup sorghum in a stainless steel pot with no oil while stirring

One problem remained: a large number of kernels simply burned instead of popping.  It was then that I re-watched the second video, and noticed that the Just Poppin' folks specifically recommend using a stainless steel pot for popping sorghum without oil.  Apparently the blackness of the cast iron just holds too much heat.



After all these years, you might be surprised that I do not, in fact, own a stainless steel stock pot.  I do now.  Seventeen bucks at Target.



First, heat your pot for a minute or two on medium.  Dumping the sorghum into a cold pot will not help pop your sorghum.  Shake the pot to distribute the sorghum evenly, and turn down the flame to low.


The kernels started popping pretty quickly, and with constant stirring I got lots of sorghum hitting me in the hand.


The result is the bottom plate: over half of the kernels popped, though the ones that didn't really didn't, becoming even more scorched than with the other methods.

Experiment 5b: Popping 1/4 cup sorghum in a stainless steel pot with no oil, lidded with no stirring

I also tried popping sorghum with the lid and no stirring, just maybe occasionally shaking the pot.



With this method, it is again important to make sure everything is evenly distributed.


With the lid on, I got a few kernels and a lot of smoke.


Still, this method gave me results that were better than those in the cast-iron skillet, though I also got a lot of scorched kernels.

Experiment 6: Popping amaranth in a stainless steel pot with no oil, while stirring and not stirring

One final thing I tried was popping amaranth.  I understand that you can do this as well, and whatever the case it is easier to do.  As with the sorghum I found a video for it, courtesy of Oldways and the Whole Grains Council, neither of which I knew existed but both of whose existences do not surprise me.



Not so hard, is it?  It's even kind of adorable.


Apparently, amaranth is easier to find in the Baltimore area than sorghum.  It is particularly easy to find in bulk.  The Natural Market in Towson and MOM's in Timonium carry this in bulk.  I got this one at MOM's.


For the amaranth, make sure you even it all out at the bottom of the pot.  Note: I really am using waaaaaaay too much in this photo.  This is a quarter cup.  But it still started popping immediately.


I got a significant amount of popped amaranth.  It was kind of adorable, almost like "Barbie Popcorn".


I also tried covering the amaranth and not stirring it.  This time I only did an eighth of a cup.  Again this was too much.


And again, lot of tiny, tiny popped amaranth seeds.

Conclusions

So I have finally found that I have had the most success with popping sorghum if I do the following:
  • use small amounts of sorghum (and amaranth for that matter)
  • dry pop it instead of using oil
  • use a light-colored vessel, specifically a stainless steel pot
  • constantly stir it instead of leaving it to pop all on its own
  • heat the pot first, keep it on medium until the kernels get to popping, and then turn down the heat to low.
Now that I've finally found success with popping this stuff, my next goal is to find out what else I can pop.  I've seen videos for rice and wheat on the internet.  This deserves the old college try, doesn't it?

          Max MQTT connections        
http://stackoverflow.com/questions/29358313/max-mqtt-connections?answertab=votes#tab-top


I have a need to create a server farm that can handle 5+ million connections, 5+ million topics (one per client), process 300k messages/sec.

I tried to see what various message brokers were capable of, so I am currently using two RHEL EC2 instances (r3.4xlarge) to make lots of resources available. So you do not need to look it up, it has 16vCPU, 122GB RAM. I am nowhere near that limit in usage.

I am unable to pass the 600k connections limit. Since there doesn't seem to be any O/S limitation (plenty of RAM/CPU/etc.) on either the client or the server, what is limiting me?

I have edited /etc/security/limits.conf as follows:

* soft  nofile  20000000
* hard  nofile  20000000
* soft  nproc  20000000
* hard  nproc  20000000
root  soft  nofile 20000000
root  hard  nofile 20000000

I have edited /etc/sysctl.conf as follows:

net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 5242880 5242880 5242880
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 20000000
fs.nr_open = 20000000
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 10000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 65536
net.core.netdev_max_backlog = 100000
net.core.optmem_max = 20480000

For Apollo: export APOLLO_ULIMIT=20000000

For ActiveMQ:

ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.UseDedicatedTaskRunner=false"
ACTIVEMQ_OPTS_MEMORY="-Xms50G -Xmx115G"

I created 20 additional private addresses for eth0 on the client, then assigned them: ip addr add 11.22.33.44/24 dev eth0

I am FULLY aware of the 65k port limits which is why I did the above.

  • For ActiveMQ I got to: 574309
  • For Apollo I got to: 592891
  • For Rabbit I got to 90k but logging was awful and I couldn't figure out what to do to go higher, although I know it's possible.
  • For Hive I got to trial limit of 1000. Awaiting a license
  • IBM wants to trade the cost of my house to use them - nah!
asked Mar 30 '15 at 23:52 by redboy
   
Can't really tell how to increase the throughput. However, checkout kafka.apache.org . Not sure about the MQTT support, but it seems capable of extrem throughput / # clients. – Petter Nordlander Mar 31 '15 at 7:52
   
did you try mosquitto? (mosquitto.org) – Aleksey Izmailov Apr 2 '15 at 8:02
   
Trying Hive, Apollo, Mosquito, Active, Rabbit, mosquito – redboy Apr 2 '15 at 21:58

ANSWER: While doing this I realized that I had a misspelling in my client setting within /etc/sysctl.conf file for: net.ipv4.ip_local_port_range

I am now able to connect 956,591 MQTT clients to my Apollo server in 188sec.
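
A quick way to confirm that a corrected setting actually took effect is to reload the file and print the value back:

# sysctl -p
# sysctl net.ipv4.ip_local_port_range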


More info: Trying to isolate if this is an O/S connection limitation or a Broker, I decided to write a simple Client/Server.

The server:

    Socket client = null;
    server = new ServerSocket(1884);
    while (true) {
        client = server.accept();
        clients.add(client);
    }

The Client:

    while (true) {
        InetAddress clientIPToBindTo = getNextClientVIP();
        Socket client = new Socket(hostname, 1884, clientIPToBindTo, 0);
        clients.add(client);
    }

With 21 IPs, I would expect (65535-1024)*21 = 1,354,731 to be the boundary. In reality I am able to achieve 1,231,734:

[root@ip ec2-user]# cat /proc/net/sockstat
sockets: used 1231734
TCP: inuse 5 orphan 0 tw 0 alloc 1231307 mem 2
UDP: inuse 4 mem 1
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0

So the socket/kernel/io stuff is worked out.

I am STILL unable to achieve this using any broker.

Again, these are the kernel settings just after my client/server test.

Client:

[root@ip ec2-user]# sysctl -p
net.ipv4.ip_local_port_range = 1024     65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 5242880      5242880 15242880
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 20000000
fs.nr_open = 20000000

[root@ip ec2-user]# cat /etc/security/limits.conf
* soft  nofile  2000000
* hard  nofile  2000000
root  soft  nofile 2000000
root  hard  nofile 2000000

Server:

[root@ ec2-user]# sysctl -p
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 5242880      5242880 5242880
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 20000000
fs.nr_open = 20000000
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 1000000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 1000000
net.core.optmem_max = 20480000



          New Post: Mouse & redrawing frame        
Hi, how would I effectively put whatever was behind the mouse back when I move the mouse?
so for eg, if there is a black box on my screen, when I move my mouse over the black box, it overlaps it and when I move my mouse off of the black box, I want to see the black box again.
sorry if im really confusing lol, but I pretty much want to redraw what was behind the mouse when I move the mouse.
 public void drawMouse ()
        {
            screen.SetPixel((uint)m.X, (uint)m.Y, 3);
            screen.SetPixel((uint)m.X + 1, (uint)m.Y, 3);
            screen.SetPixel((uint)m.X + 2, (uint)m.Y, 3);
            screen.SetPixel((uint)m.X, (uint)m.Y + 1, 3);
            screen.SetPixel((uint)m.X, (uint)m.Y + 2, 3);
            screen.SetPixel((uint)m.X + 1, (uint)m.Y + 1, 3);
            screen.SetPixel((uint)m.X + 2, (uint)m.Y + 2, 3);
            screen.SetPixel((uint)m.X + 3, (uint)m.Y + 3, 3);
        }
 // kernel.cs:
            display.reDrawPreviousFrame(); // <<<<< This is what I need, but I don't want to redraw  the entire frame. Just what was behind the mouse
            display.drawMouse();
once again, sorry for being confusing, if you dont understand something I said please quote it and I will try to make it sound better lol
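A common technique for this is a "save-under" buffer: before you draw the cursor, remember the few pixels it is about to cover, and put them back before drawing the cursor at its new position -- that way you never redraw the whole frame. Here is a minimal sketch; it assumes your screen driver exposes a GetPixel counterpart to SetPixel (if it doesn't, keep your own off-screen copy of what you drew and read from that instead):
 // Hypothetical save-under sketch for the 4x4 area the cursor occupies.
        uint[,] saved = new uint[4, 4];
        int savedX = -1, savedY = -1;

        public void RestoreUnderMouse()
        {
            if (savedX < 0) return; // nothing saved yet
            for (int dy = 0; dy < 4; dy++)
                for (int dx = 0; dx < 4; dx++)
                    screen.SetPixel((uint)(savedX + dx), (uint)(savedY + dy), saved[dx, dy]);
        }

        public void SaveUnderMouse()
        {
            savedX = (int)m.X; savedY = (int)m.Y;
            for (int dy = 0; dy < 4; dy++)
                for (int dx = 0; dx < 4; dx++)
                    saved[dx, dy] = screen.GetPixel((uint)(savedX + dx), (uint)(savedY + dy)); // GetPixel is an assumption
        }
 // kernel.cs, on every mouse move:
        // RestoreUnderMouse(); // put back what was behind the old cursor position
        // SaveUnderMouse();    // remember what the new cursor will cover
        // drawMouse();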

          Spring 2017 tech reading        
Hello and a belated happy new year to you! Here's another big list of articles I thought was worth sharing. As always thanks to the authors who wrote these articles and to the people who shared them on Twitter/HackerNews/etc.

Distributed systems (and even plain systems)

Tuning

SQL lateral view

Docker and containers

Science and math

Golang

Java streams and reactive systems

Java Lambdas

Just Java

General and/or fun

Until next time!

          Late summer 2015 tech reading        
This should keep you busy for a few weekends.

(Once again, thanks to all the people who shared some of these originally on Twitter, Google+, HackerNews and other sources)

Java/Performance:
Java Bytecode Notes:
Java 8/Lambdas:
Tech Vids:
Data:
Misc:
Some old notes on SQL Cubes and Rollups:
Until next time!
          Starting 2015 with yet another link dump        
A belated happy new year! Here's some reading material I've been accumulating for a few months.

Distributed systems:
Performance related:
On tuning:
Misc tech articles:
Formatting comments on Gerrit:
That's it for now!
          How to Cook Corn        
With Summer fast approaching, you might be thinking about grilling up some corn. But leave it to our pals at Foodbeast to point out another food we’ve been doing wrong all of these years. We think we’d miss the charred kernels though.
          Trees        
Trees are sanctuaries. Whoever knows how to speak to them, whoever knows how to listen to them, can learn the truth. They do not preach learning and precepts, they preach, undeterred by particulars, the ancient law of life. A tree says: A kernel is hidden in me, a spark, a thought, I am life from […]
          Announcing TurnKey Hub v1.0 - now officially out of private beta        

Hub Front

When we first announced the TurnKey Hub private beta about 9 months ago, we had limited capacity (invitation only) and a modest feature set. Since then we tested, bugfixed, removed bottlenecks and added features, constantly improving the Hub with the help and feedback from our excellent beta users. Thank you so much!

With the release of TurnKey 11 which was tightly integrated with TKLBAM and the Hub, the amount of Hub invitation requests exploded. We were prepared for this and managed to scale the Hub smoothly without any serious issues.

With several months of testing, feedback and bugfixes under our belt we are now confident enough to officially announce, a bit earlier than planned, that the Hub is out of private beta. As of today, the Hub is open to all, and new users will no longer be required to request an invitation.

Existing users can rest easy though. We will continue to carefully monitor the Hub's performance. There should be no interruptions to the service. Worst case scenario, if we start hitting unforeseen capacity issues we will temporarily reintroduce the limit on new signups.

Review of notable changes since the initial release

TurnKey Backup and Migration

  • A few months into the private beta we announced support for TurnKey Backup and Migration (AKA TKLBAM), which amongst other uses makes previously difficult tasks such as testing your backups much easier.
  • In response to demand, we've added support for configurable backup retention. Users can specify how many full backups they would like to keep for any given server backup (set to unlimited by default).

TurnKey Cloud Servers

  • Support for TurnKey Linux 11 images (legacy images still available to ease migration).
  • Basic pre-launch configuration: No more having to fiddle with the default passwords after an instance launches. The Hub supports pre-seeding appliance configuration before launch. This makes up for not having console access that would usually be required for first boot configuration.
  • TKLBAM pre-initialization: No more having to cut and paste your Hub APIKEY to initialize TKLBAM. The Hub pre-initializes TKLBAM automatically when the instance is first launched.
  • Upgradeable Kernels: We've figured out how to make it easy to update the kernel via pv-grub.
  • Preset launch region automatically chosen by geo-location of user.

General stuff

  • Performance optimizations, improved stability and error handling.
  • Refined the look and feel with an update to the theme.
  • We now try harder to explain how the Hub works and what it's good for before and after you sign up. For example we've added nice visual tours of the Backup and migration and Cloud servers features.
  • We've added a pricing page answering frequently asked questions. Yes, the Hub is still free. You pay Amazon directly for the cloud resources you use.
  • Improved start page to get you going once you sign up. Once you setup your account, this transforms into a dashboard that provides a high level overview and quick access links.
  • New and improved notifications (growl style).
  • Removed invitation requirement and added support for OpenID signup and authentication.
  • Added functionality to change account email.
  • Full internationalization support (UTF-8).
  • APT archive geo-location API service for choosing the closest package archive.
  • Link to Privacy policy.

As usual, feedback is appreciated. If you don't have a TurnKey Hub account yet, go get one now or try out the demo. If you already have a TurnKey Hub account, go check out the new stuff.

The TurnKey Hub lives at: https://hub.turnkeylinux.org


          Deploying Highly Available Virtual Interfaces With Keepalived        

Linux is a powerhouse when it comes to networking, and provides a full featured and high performance network stack. When combined with web front-ends such as HAProxy, lighttpd, Nginx, Apache or your favorite application server, Linux is a killer platform for hosting web applications. Keeping these applications up and operational can sometimes be a challenge, especially in this age of horizontally scaled infrastructure and commodity hardware. But don't fret, since there are a number of technologies that can assist with making your applications and network infrastructure fault tolerant.

One of these technologies, keepalived, provides interface failover and the ability to perform application-layer health checks. When these capabilities are combined with the Linux Virtual Server (LVS) project, a fault in an application will be detected by keepalived, and the virtual interfaces that are accessed by clients can be migrated to another available node. This article will provide an introduction to keepalived, and will show how to configure interface failover between two or more nodes. Additionally, the article will show how to debug problems with keepalived and VRRP.

What Is Keepalived?


The keepalived project provides a keepalive facility for Linux servers. This keepalive facility consists of a VRRP implementation to manage virtual routers (aka virtual interfaces), and a health check facility to determine if a service (web server, samba server, etc.) is up and operational. If a service fails a configurable number of health checks, keepalived will fail a virtual router over to a secondary node. While useful in its own right, keepalived really shines when combined with the Linux Virtual Server project. This article will focus on keepalived, and a future article will show how to integrate the two to create a fault tolerant load-balancer.

Installing KeepAlived From Source Code


Before we dive into configuring keepalived, we need to install it. Keepalived is distributed as source code, and is available in several package repositories. To install from source code, you can execute wget or curl to retrieve the source, and then run "configure", "make" and "make install" to compile and install the software:

$ wget http://www.keepalived.org/software/keepalived-1.1.17.tar.gz
$ tar xfvz keepalived-1.1.17.tar.gz
$ cd keepalived-1.1.17
$ ./configure --prefix=/usr/local
$ make && make install

In the example above, the keepalived daemon will be compiled and installed as /usr/local/sbin/keepalived.

Configuring KeepAlived


The keepalived daemon is configured through a text configuration file, typically named keepalived.conf. This file contains one or more configuration stanzas, which control notification settings, the virtual interfaces to manage, and the health checks to use to test the services that rely on the virtual interfaces. Here is a sample annotated configuration that defines two virtual IP addresses to manage, and the individuals to contact when a state transition or fault occurs:

# Define global configuration directives
global_defs {
    # Send an e-mail to each of the following
    # addresses when a failure occurs
    notification_email {
        matty@prefetch.net
        operations@prefetch.net
    }

    # The address to use in the From: header
    notification_email_from root@VRRP-director1.prefetch.net

    # The SMTP server to route mail through
    smtp_server mail.prefetch.net

    # How long to wait for the mail server to respond
    smtp_connect_timeout 30

    # A descriptive name describing the router
    router_id VRRP-director1
}

# Create a VRRP instance
vrrp_instance VRRP_ROUTER1 {

    # The initial state to transition to. This option isn't
    # really all that valuable, since an election will occur
    # and the host with the highest priority will become
    # the master. The priority is controlled with the priority
    # configuration directive.
    state MASTER

    # The interface keepalived will manage
    interface br0

    # The virtual router id number to assign the routers to
    virtual_router_id 100

    # The priority to assign to this device. This controls
    # who will become the MASTER and BACKUP for a given
    # VRRP instance.
    priority 100

    # How many seconds to wait until a gratuitous arp is sent
    garp_master_delay 2

    # How often to send out VRRP advertisements
    advert_int 1

    # Execute a notification script when a host transitions to
    # MASTER or BACKUP, or when a fault occurs. The arguments
    # passed to the script are:
    #  $1 - "GROUP"|"INSTANCE"
    #  $2 = name of group or instance
    #  $3 = target state of transition
    # Sample: VRRP-notification.sh VRRP_ROUTER1 BACKUP 100
    notify "/usr/local/bin/VRRP-notification.sh"

    # Send an SMTP alert during a state transition
    smtp_alert

    # Authenticate the remote endpoints via a simple
    # username/password combination
    authentication {
        auth_type PASS
        auth_pass 192837465
    }

    # The virtual IP addresses to float between nodes. The
    # label statement can be used to bring an interface
    # online to represent the virtual IP.
    virtual_ipaddress {
        192.168.1.100 label br0:100
        192.168.1.101 label br0:101
    }
}

The configuration file listed above is self explanatory, so I won't go over each directive in detail. I will point out a couple of items:

  • Each host is referred to as a director in the documentation, and each director can be responsible for one or more VRRP instances
  • Each director will need its own copy of the configuration file, and the router_id, priority, etc. should be adjusted to reflect the nodes name and priority relative to other nodes
  • To force a specific node to master a virtual address, make sure the director's priority is higher than the other virtual routers
  • If you have multiple VRRP instances that need to fail over together, you will need to add each instance to a vrrp_sync_group
  • The notification script can be used to generate custom syslog messages, or to invoke some custom logic (e.g., restart an app) when a state transition or fault occurs (see the sketch after this list)
  • The keepalived package comes with numerous configuration examples, which show how to configure numerous aspects of the server
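
To make the notification hook concrete, here is a minimal sketch of what /usr/local/bin/VRRP-notification.sh could look like, based purely on the arguments documented in the sample configuration above (logging with logger is just one possibility; you could restart an application here instead):

#!/bin/sh
# $1 = "GROUP"|"INSTANCE", $2 = name of group or instance, $3 = target state
TYPE=$1
NAME=$2
STATE=$3
logger -t keepalived "VRRP ${TYPE} ${NAME} transitioned to ${STATE}"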

Starting Keepalived


Keepalived can be executed from an RC script, or started from the command line. The following example will start keepalived using the configuration file /usr/local/etc/keepalived.conf:

$ keepalived -f /usr/local/etc/keepalived.conf 

If you need to debug keepalived issues, you can run the daemon with the "--dont-fork", "--log-console" and "--log-detail" options:

$ keepalived -f /usr/local/etc/keepalived.conf --dont-fork --log-console --log-detail 

These options will stop keepalived from fork'ing, and will provide additional logging data. Using these options is especially useful when you are testing out new configuration directives, or debugging an issue with an existing configuration file.

Locating The Router That is Managing A Virtual IP


To see which director is currently the master for a given virtual interface, you can check the output from the ip utility:

VRRP-director1$ ip addr list br0
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:24:8c:4e:07:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.6/24 brd 192.168.1.255 scope global br0
    inet 192.168.1.100/32 scope global br0:100
    inet 192.168.1.101/32 scope global br0:101
    inet6 fe80::224:8cff:fe4e:7f6/64 scope link
       valid_lft forever preferred_lft forever

VRRP-director2$ ip addr list br0
5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:24:8c:4e:07:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.7/24 brd 192.168.1.255 scope global br0
    inet6 fe80::224:8cff:fe4e:7f6/64 scope link
       valid_lft forever preferred_lft forever

In the output above, we can see that the virtual interfaces 192.168.1.100 and 192.168.1.101 are currently active on VRRP-director1.

Troubleshooting Keepalived And VRRP


The keepalived daemon will log to syslog by default. Log entries will range from entries that show when the keepalive daemon started, to entries that show state transitions. Here are a few sample entries that show keepalived starting up, and the node transitioning a VRRP instance to the MASTER state:

Jul  3 16:29:56 disarm Keepalived: Starting Keepalived v1.1.17 (07/03,2009)
Jul  3 16:29:56 disarm Keepalived: Starting VRRP child process, pid=1889
Jul  3 16:29:56 disarm Keepalived_VRRP: Using MII-BMSR NIC polling thread...
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering Kernel netlink reflector
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering Kernel netlink command channel
Jul  3 16:29:56 disarm Keepalived_VRRP: Registering gratutious ARP shared channel
Jul  3 16:29:56 disarm Keepalived_VRRP: Opening file '/usr/local/etc/keepalived.conf'.
Jul  3 16:29:56 disarm Keepalived_VRRP: Configuration is using : 62990 Bytes
Jul  3 16:29:57 disarm Keepalived_VRRP: VRRP_Instance(VRRP_ROUTER1) Transition to MASTER STATE
Jul  3 16:29:58 disarm Keepalived_VRRP: VRRP_Instance(VRRP_ROUTER1) Entering MASTER STATE
Jul  3 16:29:58 disarm Keepalived_VRRP: Netlink: skipping nl_cmd msg...

If you are unable to determine the source of a problem with the system logs, you can use tcpdump to display the VRRP advertisements that are sent on the local network. Advertisements are sent to a reserved VRRP multicast address (224.0.0.18), so the following filter can be used to display all VRRP traffic that is visible on the interface passed to the "-i" option:

$ tcpdump -vvv -n -i br0 host 224.0.0.18
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br0, link-type EN10MB (Ethernet), capture size 96 bytes

10:18:23.621512 IP (tos 0x0, ttl 255, id 102, offset 0, flags [none], proto VRRP (112), length 40) \
    192.168.1.6 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, \
    intvl 1s, length 20, addrs: 192.168.1.100 auth "19283746"

10:18:25.621977 IP (tos 0x0, ttl 255, id 103, offset 0, flags [none], proto VRRP (112), length 40) \
    192.168.1.6 > 224.0.0.18: VRRPv2, Advertisement, vrid 100, prio 100, authtype simple, \
    intvl 1s, length 20, addrs: 192.168.1.100 auth "19283746"
    .........

The output contains several pieces of data that be useful for debugging problems:

authtype - the type of authentication in use (authentication configuration directive)
vrid - the virtual router id (virtual_router_id configuration directive)
prio - the priority of the device (priority configuration directive)
intvl - how often to send out advertisements (advert_int configuration directive)
auth - the authentication token sent (auth_pass configuration directive)

Conclusion


In this article I described how to set up a host to use the keepalived daemon, and provided a sample configuration file that can be used to fail over virtual interfaces between servers. Keepalived has a slew of options not covered here, and I will refer you to the keepalived source code and documentation for additional details.




          Who is Linus Torvalds?        
Linus Benedict Torvalds (born on December 28, 1969 in Helsinki, Finland) is a Finland-Swedish software engineer best known for having initiated the development of the Linux kernel. He later became the chief architect of the Linux kernel, and now acts as the project's coordinator.
          What Is Linux?        
Linux is a generic term referring to Unix-like computer operating systems based on the Linux kernel. Their development is one of the most prominent examples of free and open source software collaboration; typically all the underlying source code can be used, freely modified, and redistributed by anyone under the terms of the GNU GPL and other free licenses. Linux is predominantly known for its use in servers, although it is installed on a wide variety of computer hardware, ranging from embedd…
          How Do I Get Started with Linux?        
If you are new to Linux, you should start by buying or downloading a general-purpose Linux distribution. A distribution is a complete operating system, including the Linux kernel and all the utilities and software you are likely to need, ready to install and use. Most distributions include thousands of software packages, including user-friendly desktops, office suites, and games. There are a handful of major Linux distributions, and as a beginner you are probably safer using one of them. For …
          How can I see my Local Name or Linux Version?        
You can execute the command "uname -a" and it will display something similar to:

Linux UGN 2.6.28-11-generic #42-Ubuntu SMP Fri Apr 17 01:57:59 UTC 2009 i686 GNU/Linux

Useful information included equates to: "Kernel Name" "Local Name" "Kernel Version" "Distribution Name"
          Arugula Corn Salad with Bacon        
This is a salad of bold flavors, but somehow they all manage to work together well. Sweet corn tossed with peppery arugula, bacon, onion, cumin, and wine vinegar to balance the sweet corn, and you have stimulated all the major tastes your tongue can perceive. If you have a grill, and the time, I highly recommend grilling the corn (in their husks) for this recipe; the smokey flavor just can't be beat.


Arugula Corn Salad with Bacon Recipe
INGREDIENTS
4 large ears of corn
2 cups of chopped arugula (about one bunch)
4 strips of bacon, cooked, chopped
1/3 cup chopped green onions
1 Tbsp olive oil
1 Tbsp white wine vinegar
1/8 teaspoon ground cumin
Salt and freshly ground black pepper to taste

METHOD
1 Cook the corn ears, in their husks, either on the grill for a smokey flavor, or by steaming in a large covered stock pot with an inch of boiling water at the bottom of the pot, for 12-15 minutes. Let the corn cool (can run under cold water to speed up the cooling), remove the husks and silk. I recommend cooking the corn in the husks for the added flavor that the husks impart. If you boil or steam the corn ears after you've already husked them, or if you cook them in the microwave, reduce the cooking time by a few minutes.

2 To remove the kernels from the cobs, stand a corn cob vertically over a large, shallow bowl. Use a sharp knife to make long, downward strokes, removing the kernels from the cob, as you work your way around the cob. Note: it may help to work over a low table, to be in a better ergonomic position to cut the cobs this way.
3 In a medium sized bowl, mix together the corn, chopped arugula, bacon, and onions. In a separate bowl, whisk together the oil, vinegar, salt and pepper, and cumin. Mix dressing into salad just before serving. Taste and add more vinegar if necessary to balance the sweetness of the corn.
Yield: Serves 4.
          CentOS 7: Installing Nginx, PHP7 and PHP-FPM        

  1. Install nginx 
    CentOS 7 does not ship with nginx, so first go to the official nginx site http://nginx.org/en/linux_packages.html#stable, find the nginx-release package link for CentOS 7, and install it as follows:
    rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
    After installation, a yum repository configuration is generated automatically (at /etc/yum.repos.d/nginx.repo), 
    and you can then install nginx with the yum command:
    yum install nginx
  2. Start nginx 
    Services used to be managed with chkconfig; CentOS 7 manages system services with systemctl instead. 
    Start it immediately:
    systemctl start nginx
    Check the current status:
    systemctl status nginx
    Check the nginx service's startup setting:
    systemctl list-unit-files | grep nginx
    If it is disabled, make it start at boot:
    systemctl enable nginx
    If a firewall is configured, check whether it is running and whether the port nginx uses is open:
    firewall-cmd --state
    Permanently open the firewall's http service:
    firewall-cmd --permanent --zone=public --add-service=http
    firewall-cmd --reload
    List the firewall settings for the public zone:
    firewall-cmd --list-all --zone=public
    After the above setup, you should be able to reach the default nginx page in a browser.
  3. Install PHP-FPM 
    Install php, php-fpm and php-mysql with yum:
    yum install php php-fpm php-mysql
    Check the php-fpm service's startup setting: 
    systemctl list-unit-files | grep php-fpm
    Make it start at boot:
    systemctl enable php-fpm
    Start it immediately:
    systemctl start php-fpm
    Check the current status:
    systemctl status php-fpm
  4. Change how PHP-FPM listens 
    To make PHP-FPM listen on a unix socket instead, edit /etc/php-fpm.d/www.conf 
    and change
    listen = 127.0.0.1:9000
    to
    listen = /var/run/php-fpm/php-fpm.sock
    then restart php-fpm:
    systemctl restart php-fpm
    Note: do not use listen = /tmp/php-fcgi.sock (placing php-fcgi.sock under /tmp), because the system will actually create the socket under a random private directory such as /tmp/systemd-private-*/tmp/php-fpm.sock. You could change PrivateTmp=true to PrivateTmp=false in the unit file under /usr/lib/systemd/system/, but that causes other problems, so picking a different location is the easiest fix 
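
    For the socket to be of any use, nginx has to be pointed at it. Below is a minimal sketch of a matching location block (the root path is an assumption; adjust it to your own site):
    location ~ \.php$ {
        root           /usr/share/nginx/html;
        fastcgi_pass   unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }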


    Remove any previous version

    # yum remove php*

    Install the yum repositories for PHP 7 via rpm

    CentOS/RHEL 7.x:

    # rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    # rpm -Uvh https://mirror.webtatic.com/yum/el7/webtatic-release.rpm

    CentOS/RHEL 6.x:
    # rpm -Uvh https://mirror.webtatic.com/yum/el6/latest.rpm

    Install PHP 7 with yum

    yum install php70w php70w-opcache
    Install other extensions as needed (optional).
    Note: if you install pear, you also need php70w-devel.
    php70w
    php70w-bcmath
    php70w-cli
    php70w-common
    php70w-dba
    php70w-devel
    php70w-embedded
    php70w-enchant
    php70w-fpm
    php70w-gd
    php70w-imap
    php70w-interbase
    php70w-intl
    php70w-ldap
    php70w-mbstring
    php70w-mcrypt
    php70w-mysql
    php70w-mysqlnd
    php70w-odbc
    php70w-opcache
    php70w-pdo
    php70w-pdo_dblib
    php70w-pear
    php70w-pecl-apcu
    php70w-pecl-imagick
    php70w-pecl-xdebug
    php70w-pgsql
    php70w-phpdbg
    php70w-process
    php70w-pspell
    php70w-recode
    php70w-snmp
    php70w-soap
    php70w-tidy
    php70w-xml
    php70w-xmlrp

    Compiling and installing PHP 7 from source

    Configure (configure), compile (make), install (make install).

    Use configure --help to see the available options.

    When compiling from source, always specify --prefix, the installation directory. It confines all files to that directory, so uninstalling only requires deleting that directory; if you don't specify it, files get installed in many places and are hard to remove later.

    Configuration: --cache-file=FILE       cache test results in FILE --help                  print this message --no-create             do not create output files --quiet, --silent       do not print `checking...' messages --version               print the version of autoconf that created configure Directory and file names: --prefix=PREFIX         install architecture-independent files in PREFIX [/usr/local] --exec-prefix=EPREFIX   install architecture-dependent files in EPREFIX

    注意
    内存小于1G安装往往会出错,在编译参数后面加上一行内容--disable-fileinfo

    Other configure options

    --exec-prefix=EXEC-PREFIX
    Installs architecture-dependent files to a location other than the one set by PREFIX. This makes it easy to share the architecture-independent files between different hosts.
    --bindir=DIRECTORY
    Directory for executables; defaults to EXEC-PREFIX/bin.
    --datadir=DIRECTORY
    Directory for read-only data files needed by the installed programs; defaults to PREFIX/share.
    --sysconfdir=DIRECTORY
    Directory for miscellaneous configuration files; defaults to PREFIX/etc.
    --libdir=DIRECTORY
    Directory for libraries and dynamically loadable modules; defaults to EXEC-PREFIX/lib.
    --includedir=DIRECTORY
    Directory for C and C++ header files; defaults to PREFIX/include.
    --docdir=DIRECTORY
    Documentation files (other than man pages) are installed into this directory; defaults to PREFIX/doc.
    --mandir=DIRECTORY
    The man pages that ship with the program are installed into this directory, in their corresponding manx subdirectories; defaults to PREFIX/man.
    Note: to reduce pollution of shared installation locations (such as /usr/local/include), configure automatically appends "/postgresql" to datadir, sysconfdir, includedir and docdir unless the fully expanded directory name already contains the string "postgres" or "pgsql". For example, if you choose /usr/local as the prefix, C header files are installed to /usr/local/include/postgresql, whereas with a prefix of /opt/postgres they go into /opt/postgres/include.
    --with-includes=DIRECTORIES
    DIRECTORIES is a colon-separated list of directories added to the compiler's header file search path. If you have optional packages (such as GNU Readline) installed in non-standard locations, you must use this option, probably together with the corresponding --with-libraries option.
    --with-libraries=DIRECTORIES
    DIRECTORIES is a colon-separated list of directories to search for library files. If some packages are installed in non-standard locations, you may need this option (together with the corresponding --with-includes option).
    --enable-XXX
    Turns on support for XXX.
    --with-XXX
    Builds the XXX module.
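    Putting the options together, a configure invocation might look like the sketch below. The prefix and option set are illustrative, not a recommended build:

    ./configure --prefix=/usr/local/php \
        --sysconfdir=/usr/local/php/etc \
        --enable-fpm \
        --with-mysqli \
        --disable-fileinfo   # per the note above, for machines with less than 1 GB of RAM
    make
    make install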

    • PHP-FPM configuration reference
      [global]
      pid = /usr/local/php/var/run/php-fpm.pid
      error_log = /usr/local/php/var/log/php-fpm.log
      [www]
      listen = /var/run/php-fpm/php-fpm.sock
      user = www
      group = www
      pm = dynamic
      pm.max_children = 800
      pm.start_servers = 200
      pm.min_spare_servers = 100
      pm.max_spare_servers = 800
      pm.max_requests = 4000
      rlimit_files = 51200
      
      listen.backlog = 65536
      ;65536 is used because -1 may not actually mean unlimited
      ;see http://php.net/manual/en/install.fpm.configuration.php#104172
      
      slowlog = /usr/local/php/var/log/slow.log
      request_slowlog_timeout = 10
    • nginx.conf configuration reference 
      user  nginx;
      worker_processes  8;
      error_log  /var/log/nginx/error.log warn;
      pid		/var/run/nginx.pid;
      events {
        use epoll;
        worker_connections  65535;
      }
      worker_rlimit_nofile 65535;
      #without this you may see the error: 65535 worker_connections exceed open file resource limit: 1024
      http {
        include	   /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        sendfile		on;
        tcp_nopush	 on;
        keepalive_timeout  65;
        server_names_hash_bucket_size 128;
        client_header_buffer_size 32k;
        large_client_header_buffers 4 32k;
        client_max_body_size 8m;
        server_tokens  off;
        client_body_buffer_size  512k;
        # fastcgi
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
        fastcgi_buffer_size 64k;
        fastcgi_buffers 4 64k;
        fastcgi_busy_buffers_size 128k;
        fastcgi_temp_file_write_size 128k;
        fastcgi_intercept_errors on;
        #gzip (see http://nginx.org/en/docs/http/ngx_http_gzip_module.html)
        gzip  off;
        gzip_min_length  1k;#only compress responses of 1k or more
        gzip_buffers 32  4k;
          #http://stackoverflow.com/questions/4888067/how-to-get-linux-kernel-page-size-programatically
          #use getconf PAGESIZE to get the system's memory page size
        gzip_http_version  1.0;
        gzip_comp_level  2;
        gzip_types  text/css text/xml application/javascript application/atom+xml application/rss+xml text/plain application/json;
          #see nginx's mime.types file (/etc/nginx/mime.types) for the various type definitions
        gzip_vary  on;
        include /etc/nginx/conf.d/*.conf;
      }
      
      If you get the error: setrlimit(RLIMIT_NOFILE, 65535) failed (1: Operation not permitted) 
      first check the system's current limit:
      ulimit -n
      If the value is too small, edit /etc/security/limits.conf
      vi /etc/security/limits.conf
      and add or modify the following two lines:
      * soft nofile 65535
      * hard nofile 65535
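      After editing, it is worth validating both configurations before reloading. A quick check (paths assume the defaults used above):
      nginx -t        # test nginx.conf syntax
      php-fpm -t      # test the php-fpm configuration
      ls -l /var/run/php-fpm/php-fpm.sock   # confirm the unix socket exists after restart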





Alpha 2016-08-10 13:44

          Gentoo Installation and Configuration Walkthrough        

Some time ago I installed Gentoo Linux in VMware, using the latest release at the time. I took notes during the installation but never organized them for the blog. Today I am finally writing them up to share and to keep for reference. I have not been using Gentoo for long, so I am still not very familiar with this distribution.

Compared with other Linux distributions, Gentoo does have its advantages: the kernel is built from source, so it can be optimized and customized, and upgrades are convenient!

For an introduction to the Gentoo distribution, see: the ten most popular Linux distributions worldwide (illustrated).

Host environment: Win2008 + VMware 7.1

Downloading the installation packages

Download the installation CD and the stage3 tarball:

http://www.gentoo.org/main/en/where.xml

I used the x86 platform:

http://distfiles.gentoo.org/releases/x86/autobuilds/current-iso/

wget -c http://distfiles.gentoo.org/releases/x86/autobuilds/current-iso/install-x86-minimal-20100216.iso

wget -c http://distfiles.gentoo.org/releases/x86/autobuilds/current-iso/stage3-i686-20100216.tar.bz2

wget -c http://distfiles.gentoo.org/snapshots/portage-20100617.tar.bz2

The latest stage3 tarballs are here: http://distfiles.gentoo.org/releases/x86/autobuilds/current-stage3/

Starting the installation

Attach the installation CD to the virtual machine and boot into the default terminal.

Configure the network first; everything afterwards can be done over an ssh connection.

ifconfig eth0 192.168.80.133 (VMware had already assigned this internal IP in my case)
echo nameserver 8.8.8.8 > /etc/resolv.conf
echo nameserver 8.8.4.4 >> /etc/resolv.conf

Set the root password:

passwd root

Start the sshd service:

/etc/init.d/sshd start

Connect to the virtual machine from Windows with SecureCRT or PuTTY.

Disk partitioning

Partition first; cfdisk is recommended. Check the current partition layout:

cfdisk /dev/sda

My partition table (with /boot split out as its own partition): /dev/sda2 is the / root partition, /dev/sda3 is the swap partition.

Format the partitions:

mkfs.ext3 /dev/sda1
mkfs.ext3 /dev/sda2
mkswap /dev/sda3

Activate the swap partition:

swapon /dev/sda3

Write the partition information to the fstab configuration file (note: gentoo-minimal does not ship vi, only the nano editor):

nano -w /etc/fstab

Add the following entries:

/dev/sda1 /boot ext3 noauto,noatime 1 2
/dev/sda2 / ext3 noatime 0 1
/dev/sda3 none swap sw 0 0

Unpacking stage3 and portage

Create the basic directory structure:

mount /dev/sda2 /mnt/gentoo
mkdir /mnt/gentoo/boot
mount /dev/sda1 /mnt/gentoo/boot
cd /mnt/gentoo

Upload the stage3 tarball to /mnt/gentoo with WinSCP or CuteFTP, then unpack it:

(Note: the address shown in the window title above had not been updated; the actual address is 192.168.80.133.)

tar jxvf stage3-i686-20100608.tar.bz2
rm -f stage3-i686-20100608.tar.bz2

Upload the portage tarball to /mnt/gentoo/usr, then unpack it:

tar jxvf portage-20100617.tar.bz2
rm -f portage-20100617.tar.bz2

Switching into the new system

cd /
mount -t proc proc /mnt/gentoo/proc
mount -o bind /dev /mnt/gentoo/dev
cp -L /etc/resolv.conf /mnt/gentoo/etc/
chroot /mnt/gentoo /bin/bash
env-update && source /etc/profile

Hostname and domain settings

cd /etc
echo "127.0.0.1 gentoo.at.home gentoo localhost" > hosts
sed -i -e 's/HOSTNAME.*/HOSTNAME="gentoo"/' conf.d/hostname
hostname gentoo

Building and installing the kernel

lsmod

Find the network card driver module among the loaded modules:

floppy 55736 0
rtc 7960 0
tg3 103228 0
libphy 24952 1 tg3
e1000 114636 0
fuse 59344 0
jfs 153104 0
raid10 20648 0

Download the sources and configure the kernel:

emerge --sync
emerge gentoo-sources
cd /usr/src/linux
make menuconfig

In the configuration interface, type /e1000 to search for e1000 and find where the driver lives:

| Symbol: E1000 [=y]
| Prompt: Intel(R) PRO/1000 Gigabit Ethernet support
| Defined at drivers/net/Kconfig:2020
| Depends on: NETDEVICES && NETDEV_1000 && PCI
| Location:
| -> Device Drivers
| -> Network device support (NETDEVICES [=y])
| -> Ethernet (1000 Mbit) (NETDEV_1000 [=y])

(Be careful here: select the right network card driver for the kernel!)
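Before building, a quick way to confirm the option really ended up enabled in the generated config (the symbol name is the one from the search result above):

grep CONFIG_E1000 /usr/src/linux/.config

A line reading CONFIG_E1000=y means the driver is built into the kernel.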

The virtual machine's disk uses the LSI Logic SCSI adapter,

so support for the Fusion MPT base driver must be added as well (see the dmesg log):

Device Drivers --->
--- Fusion MPT device support
<*> Fusion MPT ScsiHost drivers for SPI
<*> Fusion MPT ScsiHost drivers for FC
<*> Fusion MPT ScsiHost drivers for SAS
(128) Maximum number of scatter gather entries (16 - 128)
<*> Fusion MPT misc device (ioctl) driver

This driver is required; without it the system may fail to boot with errors like:

VFS: Unable to mount root fs via NFS, trying floppy.
VFS: Cannot open root device "sda2" or unknown-block(2,0)
Please append a correct "root=" boot option; here are the available partitions:
0b00 1048575 sr0 driver: sr
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(2,0)

Add support for the ext4 file system:

File systems --->
<*> Second extended fs support
[*] Ext4 extended attributes
[*] Ext4 POSIX Access Control Lists
[*] Ext4 Security Labels
[*] Ext4 debugging support

Start compiling the kernel:

make -j2
make modules_install
cp arch/x86/boot/bzImage /boot/kernel

Installing and configuring grub

emerge grub

grub
> root (hd0,0)
> setup (hd0)
> quit

Edit the boot configuration file grub.conf:

nano -w /boot/grub/grub.conf

grub.conf contents:

default 0
timeout 9

title Gentoo
root (hd0,0)
kernel /boot/kernel root=/dev/sda2

System configuration

File system mount points:

nano -w /etc/fstab
/dev/sda1 /boot ext3 noauto,noatime 1 2
/dev/sda2 / ext3 noatime 0 1
/dev/sda3 none swap sw 0 0

Network settings:

echo 'config_eth0=( "192.168.80.133" )' >> /etc/conf.d/net
echo 'routes_eth0=( "default via 192.168.80.2" )' >> /etc/conf.d/net

SSH service setup:

rc-update add sshd default

Time zone setup:

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
nano -w /etc/conf.d/clock

Set the root password:

passwd root

Reboot to finish the installation

exit
umount /mnt/gentoo/dev /mnt/gentoo/proc /mnt/gentoo/boot /mnt/gentoo
reboot

Gentoo boots successfully:

OK, done!

Appendix: make.conf

CFLAGS="-march=native -O2 -pipe -fomit-frame-pointer -mmmx -msse -msse2"
CXXFLAGS="${CFLAGS}"
MAKEOPTS="-j5"
CHOST="x86_64-pc-linux-gnu"
USE="jpeg ssl nls unicode cjk zh nptl nptlonly mmx sse sse2 -X -gtk -gnome \
     sasl maildir imap libwww mysql xml sockets vhosts snmp \
     -lvm -lvm1 -kde -qt -cups -alsa -apache"
ACCEPT_KEYWORDS="~amd64"
LINGUAS="zh_CN"
SYNC="rsync://rsync.asia.gentoo.org/gentoo-portage"
GENTOO_MIRRORS="http://mirrors.163.com/gentoo ftp://gg3.net/pub/linux/gentoo"

VIDEO_CARDS="vesa"

ALSA_CARDS=""
ALSA_PCM_PLUGINS=""
APACHE2_MODULES=""
QEMU_SOFTMMU_TARGETS="i386 x86_64"
QEMU_USER_TARGETS="i386 x86_64"

Reference documentation: http://www.gentoo.org/doc/zh_cn/handbook/handbook-amd64.xml

Reference: http://www.ha97.com/







Alpha 2011-06-28 16:03

          RT/embedded OS use poll        
Greetings,

I am examining the case for a new RT/embedded OS in today's market. I would like to get a simple YES/NO answer from embedded developers.

Suppose that you are starting a new project. You are considering embedded Linux, and you are offered an RT/embedded OS by a vendor, John Doe Inc. The RTOS comes with complete source code, builds with the GNU toolchain, and the development license is free. The production license cost is negligible for your purposes, and support costs the same as support from embedded Linux vendors. In other words, John Doe's offer is on par with embedded Linux with regard to source code, development, and license costs.

Let's also suppose that John Doe's RTOS matches Linux in ready-made code availability (John Doe Inc. has ported the open-source libraries you need to its OS).

Now you have to weigh the pros and cons of the new RTOS option against Linux (we assume that you know the Linux pros and cons pretty well).

Pros:

* Smaller footprint, better configurability
* More system resources left free
* Better performance of OS kernel and services
* Real-time responsiveness
* Simpler build and maintenance
* Simpler programming

Cons:

* Worse support (supported only by John Doe Inc. and its distributors, no match to Linux community)
* Established bad reputation of dedicated RT/embedded OSes

The question is: would you give John Doe's RT/embedded OS a try?

Thanks,
Daniel
          That Reminds Me of a Story...        

We caught fish.
More than that,
We made stories.

Stories that we’ve told over and over.
Stories that make us laugh with every telling.
Stories we will continue to tell, over and over,
As long as we’re here to tell them.
Stories that will keep you with us forever,
Now that you’re gone.

Some true.
Some with a kernel of truth.
Some we’ve made true in the telling.
It’s hard to remember which are which any more,
As if it really mattered.

We gathered together tonight and told them again.
Set aside the vises, the hooks and the feathers,
And, instead, tipped a glass or two.
Told the stories one more time.
Laughed with you as if you were here,
When, in truth, you were.
In the stories.

It may have started with fish,
But not a single tale tells of the catch.
They tell of falling overboard,
Of getting shit-faced,
Of putting our foot in our mouths at the worst of times.
They tell of broken rods, bent transoms, and anchors tossed overboard unattached.
Too many are poop or fart stories, I'm embarrassed to say.
Funny at six and at sixty. Boys will always be boys.
They make us laugh at ourselves and we deserve it.
No one is spared,
For they are our stories,
Yours and ours.

Yes, we caught fish.
More than that,
Much more than that,
We made stories.


          A little Valentine's Day love for KERNEL PANIC        
As mentioned in my last post here (two in one day after years of bloody silence? Amazing…) I have a profound need to be LOUD. So here I am, being loud about my short film KERNEL PANIC, which I have … Continue reading
          Troubleshooting Kernel Panic – How I got it wrong        
Recently, I’ve shot, edited and delivered my first narrative short film, a sci-fi comedy called KERNEL PANIC. Overall, the response has been positive. Most people found it rather funny and (for a low budget short) well executed. It’s enjoyed a … Continue reading
          All in a panic – Why I’m not a fan of 48 hour film challenges        
Earlier this year, my friend Adam Brown and I completed work on a short film called KERNEL PANIC. The film was made as an intended entrant for a sci-fi-themed 48-hour film challenge. Adam, who produced, has entered … Continue reading
          Check Corn for Yield Limiting Factors        

Although planting might seem like ages ago, its effects can show up now. While scouting corn, in addition to checking ear size and kernel count, take a look at stand, roots, and stalks, and re-examine your ears to determine what went right (or wrong) earlier this year. Continue reading here!

The post Check Corn for Yield Limiting Factors appeared first on Axis Seed.


          Anthon Berg Apricot in Brandy Marzipan        
Anthon Berg Apricot in Brandy Marzipan

Anthon Berg Apricot in Brandy Marzipan

Ingredients: Sugar, Cocoa Mass, Almonds (11%), Apricot Kernels, Apricots (10%), Glucose Syrup, Alcohol, Cocoa Butter, Brandy (0.9%), Milk Fat, Emulsifier (Rapeseed Lecithins), Thickener (Pectin), Flavouring, Acid (Citric Acid), Preservative (Sorbic Acid, Potassium Sorbate). Minimum 50% cocoa solids in the chocolate. Alcohol 1.6% Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✖ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Anthon Berg Blueberry in Vodka Marzipan        
Anthon Berg Blueberry in Vodka Marzipan

Anthon Berg Blueberry in Vodka Marzipan

Ingredients: Sugar, Cocoa Mass, Blueberries 13%, Almonds (11%), Apricot Kernels, Apricots (9%), Glucose Syrup, Vodka 3%, Cocoa Butter, Milk Fat, Thickener (Pectin), Emulsifier (Rapeseed Lecithin), Flavouring, Acid (Citric Acid), Preservatives (Sorbic Acid, Potassium Sorbate), Natural Flavouring. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✖ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Anthon Berg Cherry in Rum Marzipan        
Anthon Berg Cherry in Rum Marzipan

Anthon Berg Cherry in Rum Marzipan

Ingredients: Sugar, Cocoa Mass, Cherries (12%), Almonds (11%), Apricot Kernels, Glucose Syrup, Spirits (Rum 0.5%), Cocoa Butter, Milk Fat, Alcohol, Emulsifier (Rapeseed Lecithins), Thickener (Pectin), Preservative (Sorbic Acid/Potassium Sorbate), Acid (Citric Acid). Minimum 50% cocoa solids in the chocolate. Alcohol 1.4% Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✖ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Anthon Berg Plum in Madeira Marzipan        
Anthon Berg Plum in Madeira Marzipan

Anthon Berg Plum in Madeira Marzipan

Ingredients: Sugar, Plums (15%), Cocoa Mass, Almonds (11%), Apricot Kernels, Glucose Syrup, Cocoa Butter, Liqueur Wine (Madeira 1.5%), Alcohol, Milk Fat, Emulsifier (Rapeseed Lecithins), Thickener (Pectin), Preservative (Sorbic Acid/Potassium Sorbate), Acid (Citric Acid). Minimum 50% cocoa solids in the chocolate. Alcohol 1.5% Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✖ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Anthon Berg Raspberry in Orange Liqueur Marzipan        
Anthon Berg Raspberry in Orange Liqueur Marzipan

Anthon Berg Raspberry in Orange Liqueur Marzipan

Ingredients: Sugar, Cocoa Mass, Almonds (11%), Apricot Kernels, Apricots (9%), Raspberries 10%, Glucose Syrup, Cocoa Butter, Alcohol, Milk Fat, Orange Liqueur (Liqueur Grand Marnier 0.7%), Emulsifier (Rapeseed Lecithin), Thickener (Pectin), Acid (Citric Acid), Preservative (Sorbic Acid, Potassium Sorbate). Minimum 50% cocoa solids in the chocolate. Alcohol 1.6% Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✖ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Anthon Berg Strawberry in Champagne Marzipan        
Anthon Berg Strawberry in Champagne Marzipan

Anthon Berg Strawberry in Champagne Marzipan

Ingredients: Sugar, Cocoa Mass, Almonds (11%), Apricot Kernels, Glucose Syrup, Strawberries (10%), Cocoa Butter, Wines (Champagne (1.5%), White Wine), Alcohol, Milk Fat, Flavouring, Emulsifier (Rapeseed Lecithins), Thickener (Pectin), Acid (Citric Acid), Preservative (Sorbic Acid/Potassium Sorbate). Minimum 50% Cocoa Solids in the Chocolate. Alcohol: 1.6%. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✖ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Blackcurrant & Liquorice        
Blackcurrant & Liquorice

Blackcurrant & Liquorice

Ingredients: Sugar, Glucose Syrup, Acid (Citric Acid), Vegetable Fat (Palm, Palm Kernel, Rapeseed), Condensed Skimmed Milk, Emulsifier (Soya Lecithin), Salt, Flavourings, Colours: Vegetable Carbon, E122, E133. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✔ Contains Artificial Flavours ✔


          Blissfully Boozy Chocolates - 15 Chocolates        
Blissfully Boozy Chocolates - 15 Chocolates

Blissfully Boozy Chocolates - 15 Chocolates

Ingredients: Sugar, Cocoa Butter, Whole Milk Powder, Cocoa Mass, Water, Liquid Invert Sugar Syrup, Glucose Syrup, Evaporated Milk, Marc de Champagne, Whisky, Rum, Orange Liqueur, Butter (Milk), Cream (Milk), Alcohol, Vegetable Oils (Palm Oil, Coconut Oil, Palm Kernel Oil), Glucose-Fructose Syrup, Sorbitol, Emulsifier (Soya Lecithin, Sunflower Lecithin), Skimmed Milk Powder, Anhydrous Milk Fat, Butter Oil (Milk), Flavourings, Low Fat Cocoa Powder, Strawberry Flavour, Natural Vanilla Flavour, Natural Orange Oil, Colour: E120; Salt, Acidity Regulator: E330.   Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✖ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Blissfully Boozy Chocolates - 24 Chocolates        
Blissfully Boozy Chocolates - 24 Chocolates

Blissfully Boozy Chocolates - 24 Chocolates

Ingredients: Sugar, Cocoa Butter, Whole Milk Powder, Cocoa Mass, Water, Liquid Invert Sugar Syrup, Glucose Syrup, Evaporated Milk, Marc de Champagne, Whisky, Rum, Orange Liqueur, Butter (Milk), Cream (Milk), Alcohol, Vegetable Oils (Palm, Coconut, Palm Kernel), Glucose-Fructose Syrup, Sorbitol, Emulsifier (Soya Lecithin, Sunflower Lecithin), Skimmed Milk Powder, Anhydrous Milk Fat, Butter Oil (Milk), Flavourings, Low Fat Cocoa Powder, Strawberry Flavour, Natural Vanilla Flavour, Natural Orange Oil, Colour E120, Salt, Acidity Regulator: E330.   Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✖ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Budweiser Gift Box        
Budweiser Gift Box

Budweiser Gift Box

Lindt Lindor Truffles: Sugar, Vegetable Fats (Coconut, Palm Kernel), Cocoa Butter, Cocoa Mass, Whole Milk Powder, Skimmed Milk Powder, Lactose, Anhydrous Milk Fat, Emulsifier (Soya Lecithin), Barley Malt Extract, Flavourings, Vanilla. Milk Chocolate contains Cocoa Solids: 31% Minimum, Milk Solids: 20% Minimum. Milk Chocolate Breasts: Cocoa Butter, Whole Milk Powder, Cocoa Mass, Emulsifier: Soya Lecithin, Natural Vanilla Flavouring, Sugar. Cocoa Solids: 34% Minimum; Milk Solids: 22% Minimum. Milk Chocolate iPhone Replica: Sugar, Whole Milk Powder, Cocoa Butter, Cocoa Mass, Modified Starches (E1422, E1412), Emulsifier: Soya Lecithin; Maltodextrin, Aroma: Natural Vanilla; Dextrose, Glycerine, Water, Stabilisers (E414, E460i), Emulsifiers (E471, E491, E435, E330), Sweetener: Glucose; Aroma: Vanillin; Preservatives (E202, E330), Food Colour: E171, E133, E104, E122, E124, E151, E110. Cocoa Solids: 30% Minimum; Milk Solids: 14% Minimum. This product contains azo-food colours. May have adverse effects on activity and attention in children. Pint Pot Jelly Sweets: Glucose Syrup, Sugar, Corn Starch, Gelatine, Lactic Acid, Flavourings, Gelling Agent (Pectins), Vegetable Oils: (Coconut, Palm Kernel), Glazing Agent: Carnauba Wax, Beeswax; Colours: E150a & E171. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✖ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✔ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✔ Contains Artificial Flavours ✔


          Cappuccino Cupcake Chocolates        
Cappuccino Cupcake Chocolates

Cappuccino Cupcake Chocolates

Ingredients: Sugar, Cocoa Butter, Whole Milk Powder, Vegetable Fats (Palm, Rapeseed, Sunflower), Skimmed Milk Powder, Invert Sugar, Cream, Lactose, Glucose Syrup, Cocoa Mass, Stabiliser (E422), Emulsifier (Soya Lecithin), Ethyl Alcohol 96% Vol., Glucose, Vegetable Oils (Palm Kernel, Palm, Sunflower), Wheat Starch, Natural Vanilla Flavouring, Roasted Coffee Bean, Soya Flour, Flavouring, Anhydrous Milk Fat. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Cappuccino Truffles        
Cappuccino Truffles

Cappuccino Truffles

Ingredients: Sugar, Vegetable Fat (Palm, Palm Kernel, Rapeseed), Cocoa Butter, Whole Milk Powder, Cocoa Mass, Skimmed Milk Powder, Lactose (Milk), Whey Powder (Milk), Dextrose, Ground Coffee, Emulsifier (Sunflower Lecithin, Soya Lecithin), Anhydrous Milk Fat, Flavourings. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Caramel Collection        
Caramel Collection

Caramel Collection

Ingredients: Sugar, Cocoa Butter, Whole Milk Powder, Cocoa Mass, Butter (Milk), Milk Fat, Vegetable Fat, Glucose Syrup, Skimmed Milk Powder, Caramelized Sugar, Sweetened Condensed Milk, Evaporated Milk, Wheat Flour, Partially Inverted Sugar Syrup, Skimmed Milk, Hydrogenated Vegetable Fat (Coconut), Vegetable Oils (Coconut, Sunflower, Palm Kernel, Rapeseed, Palm), Salt, Glucose-Fructose Syrup, Emulsifiers (E322 (Soya), E471), Cream (Milk), Stabilisers (Pectin, E422), Sorbitol, Amaretto Biscuit (Sugar, Wheat Starch, Wheat Flour, Bitter Apricot Kernels, Egg Albumen, Colouring E150b, Flavour, Baking Powder), Dextrose, Alcohol, Water, Natural Vanilla, Peppermint Oil, Raising Agents (E503, E500), Flavourings. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✔ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Chilli Fudge        
Chilli Fudge

Chilli Fudge

Ingredients: Sugar, Glucose Syrup, Sweetened Condensed Milk, Butter, Hydrogenated Palm Kernel Oil, Chilli (0.8%), Fondant (Sugar, Glucose, Water), Salt. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Chocolate Covered Peanut Brittle Jar        
Chocolate Covered Peanut Brittle Jar

Chocolate Covered Peanut Brittle Jar

Ingredients: Sugar, Peanuts (42%), Water, Milk Chocolate (14%) (Sugar, Cocoa Butter, Milk Solids, Cocoa Mass, Whey Powder, Emulsifier: Soya Lecithin, Natural Flavouring), Non Hydrogenated Vegetable Fat (Palm Oil, Palm Kernel Oil), Cream of Tartar. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Chocolate Fruits        
Chocolate Fruits

Chocolate Fruits

Ingredients: Sugar, Glucose Syrup, Acid (Citric Acid), Vegetable Fats (Palm, Palm Kernel, Rapeseed), Chocolate (Cocoa Mass, Sugar, Emulsifier (Soya Lecithin), Flavouring) Colours: E102, E104, E110, E122, E129, E133, E142. Allergen & Dietary Advice: Suitable for Vegans ✔ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✖ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✔ Contains Artificial Flavours ✔


          Chocolate Lovers Gift Box        
Chocolate Lovers Gift Box

Chocolate Lovers Gift Box

Signature Chocolate Box: Sugar, Full Cream Milk Powder, Cocoa Butter, Cocoa Mass, Sweetened Condensed Skimmed Milk, Hazelnuts, Almonds, Vegetable Fat (Palm, Sunflower, Rapeseed, Cottonseed), Butter (Milk), Maple Syrup, Glucose Syrup, Invert Sugar Syrup, Cherry Soaked in Liqueur 24% Vol, Kirsch 60% Vol, Marc de Champagne, Honey, Glucose-Fructose Syrup, Skimmed Milk Powder, Cream (Milk), Sugar Cane, Emulsifier (Soya Lecithin, Sunflower Lecithin), Low Fat Cocoa Powder, Vegetable Oil (Coconut, Palm Kernel, Sunflower), Lactose, Dextrose, Alcohol, Acidity Regulator: E330, Natural Flavouring, Natural Vanilla Flavouring, Anhydrous Milk Fat, Water, Salt, Pistachios, Colour (E100, E160a). White Chocolate Raspberries Gift Bag: Sugar, Cocoa Butter, Whole Milk Powder, Freeze Dried Raspberries (9%), Glazing Agent: Gum Arabic, Shellac; Emulsifier: Soya Lecithin, Natural Vanilla Extract. Milk Chocolate Honey Buttons: Sugar, Cocoa Butter, Whole Milk Powder, Cocoa Mass, Honey Powder (Honey (60%) Maltodextrin) (1.5%), Emulsifier: Soya Lecithin; Natural Vanilla Flavouring. Milk & Dark Chocolate Spoons: Chocolate Liquor, Sugar, Cocoa Butter, Milk Powder, Emulsifier (Soya Lecithin), Artificial Flavour (Vanillin). Chocolate Continental Nougat: Peeled Almonds, Dark Chocolate 30% (Cocoa Mass, Sugar, Cocoa, Soya Lecithin; contains min. 70% Cocoa Solids), Sugar, Glucose Syrup (Sulphite), Egg White, Wafer Paper (Natural Potato Starch, Sunflower Oil). Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✔ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Coconut Fudge Ice        
Coconut Fudge Ice

Coconut Fudge Ice

Ingredients: Sugar, Glucose Syrup (Sulphites), Desiccated Coconut, Vegetable Oil (Palm Stearin Oil, Palm Kernel Oil), Flavourings, Colour: E122. Fudge: Sugar, Glucose Syrup (Sulphites), Full Cream Condensed Milk, Butter (Milk), Vegetable Oil (Palm Stearin Oil, Palm Kernel Oil), Salt, Emulsifier: E322 (Soya Lecithin); Flavourings. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✔ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✔ Contains Artificial Flavours ✔


          Coconut Ice        
Coconut Ice

Coconut Ice

Ingredients: Sugar, Glucose Syrup (Sulphites), Desiccated Coconut, Vegetable Oil (Palm Stearin Oil, Palm Kernel Oil), Flavourings, Colour: E122. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✖ Contains Sulphur Dioxide or Sulphites ✔ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✔ Contains Artificial Flavours ✔


          Coconut Truffles        
Coconut Truffles

Coconut Truffles

Ingredients: Sugar, Cocoa Butter, Coconut, Whole Milk Powder, Water, Liquid Sugar, Batida De Coco 16% Vol., Liquid Invert Sugar Syrup, Glucose Syrup, Palm Oil, Alcohol, Coconut oil, Emulsifier (E322 Soya Lecithin, E322 Sunflower Lecithin), Flavourings, Palm Kernel Oil, Sunflower Oil, Salt, Skimmed Milk Powder, Acid (E330 Citric Acid). Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Crème Brûlée Chocolates        
Crème Brûlée Chocolates

Crème Brûlée Chocolates

Ingredients: Sugar, Vegetable Fat (Palm, Palm Kernel, Rapeseed), Whole Milk Powder, Cocoa Butter, Cocoa Mass, Skimmed Milk Powder, Lactose, Whey Powder, Dextrose, Emulsifier (Sunflower Lecithin), Flavourings. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Dandelion Mug        
Dandelion Mug

Dandelion Mug

Ingredients: Sugar, Vegetable Fats: Coconut, Palm Kernel; Cocoa Butter, Cocoa Mass, Whole Milk Powder, Skimmed Milk Powder, Lactose, Anhydrous Milk Fat, Emulsifier: Soya Lecithin; Barley Malt Extract, Flavourings. Cocoa Solids: 31% Min.; Milk Solids: 20% Min. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Dark Chocolate Salted Caramels        
Dark Chocolate Salted Caramels

Dark Chocolate Salted Caramels

Ingredients:  Sugar, Glucose Syrup, Cocoa Butter, Whole Milk Powder, Coconut Oil, Hazelnuts, Skimmed Milk, Cocoa Mass, Dextrose, Alcohol, Salt, Sunflower Oil, Water, Emulsifiers (E471 Mono- and Diglycerides Of Fatty Acids, E322, Soya Lecithin), Flavourings, Palm Kernel Oil, Skimmed Milk Powder, Rapeseed Oil, Palm Oil, Milk Fat (Anhydrous). Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Exquisite White Chocolates - 15 Chocolates        
Exquisite White Chocolates - 15 Chocolates

Exquisite White Chocolates - 15 Chocolates

Ingredients: Sugar, Cocoa Butter, Whole Milk Powder, Butter (Milk), Evaporated Milk, Cream (Milk), Vegetable Oils (Palm, Coconut, Palm Kernel, Sunflower), Liquid Sugar, Liquid Invert Sugar Syrup, Marc de Champagne, Coconut, Batida de Coco 16% vol, Coffee, Glucose, Glucose-Fructose Syrup, Sorbitol, Emulsifiers (Soya Lecithin, Sunflower Lecithin), Skimmed Milk Powder, Alcohol, Natural Vanilla Flavour, Strawberry Flavour, Salt, Acidity Regulator: E330, Colour: E120.   Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✖ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✖ Contains Other Nuts or Possible Traces ✖ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Exquisite White Chocolates - 24 Chocolates        
Exquisite White Chocolates - 24 Chocolates

Exquisite White Chocolates - 24 Chocolates

Ingredients: Sugar, Cocoa Butter, Whole Milk Powder, Butter (Milk), Evaporated Milk, Cream (Milk), Vegetable Oils (Palm, Coconut, Palm Kernel, Sunflower), Liquid Sugar, Liquid Invert Sugar Syrup, Marc de Champagne, Coconut, Batida de Coco 16% vol, Coffee, Glucose, Glucose-Fructose Syrup, Sorbitol, Emulsifiers (Soya Lecithin, Sunflower Lecithin), Skimmed Milk Powder, Alcohol, Natural Vanilla Flavour, Strawberry Flavour, Salt, Acidity Regulator: E330, Colour E120.   Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✖ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✖ Contains Other Nuts or Possible Traces ✖ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Glitter Trinket Box        
Glitter Trinket Box

Glitter Trinket Box

Ingredients: Sugar, Vegetable Fats: Coconut, Palm Kernel; Cocoa Butter, Cocoa Mass, Whole Milk Powder, Skimmed Milk Powder, Lactose, Anhydrous Milk Fat, Emulsifier: Soya Lecithin; Barley Malt Extract, Flavourings. Cocoa Solids: 31% Min.; Milk Solids: 20% Min. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✖ Contains Artificial Flavours ✖  


          Golden Ale Gift Box        
Golden Ale Gift Box

Golden Ale Gift Box

Chocolate Tool Kit: Cocoa Butter, Whole Milk Powder, Cocoa Mass, Emulsifier: Soya Lecithin, Natural Vanilla Flavouring, Sugar. Cocoa Solids: 34% Min.; Milk Solids: 22% Min. Black Treacle Toffee: Sugar, Black Treacle (21%), Glucose Syrup, Butter, Salt. Assorted Chocolate Brazils: Sugar, Brazils, Cocoa Mass, Cocoa Butter, Full Cream Milk Powder, Glazing Agent: Gum Arabic, Emulsifier: Soya Lecithin, Vanilla Extract. Jelly Pint Pots: Glucose Syrup, Sugar, Corn Starch, Gelatine, Lactic Acid, Flavourings, Gelling Agent (Pectins), Vegetable Oils: (Coconut, Palm Kernel), Glazing Agent: Carnauba Wax, Beeswax; Colours: E150a & E171. Butter Fudge Tablet: Sugar, Butter, Evaporated Milk, Glucose Syrup. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✖ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✖ Contains Other Nuts or Possible Traces ✖ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Grumpy Old Man Mug        
Grumpy Old Man Mug

Grumpy Old Man Mug

Ingredients: Glucose Syrup, Sugar, Corn Starch, Gelatine, Lactic Acid, Flavourings, Gelling Agent (Pectins), Vegetable Oils: (Coconut, Palm Kernel), Glazing Agent: Carnauba Wax, Beeswax, Colours: E150a & E171. Contains Gelatine. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✖ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✖ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✖ Contains Other Nuts or Possible Traces ✖ Contains Soya or Possible Traces ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Irish Cream Truffles        
Irish Cream Truffles

Irish Cream Truffles

Ingredients:  Sugar, Cocoa Butter, Whole Milk Powder, Water, Cocoa Mass, Liquid Invert Sugar Syrup, Glucose Syrup, Alcohol, Palm Oil, Coconut Oil, Flavourings, Emulsifier (E322, Soya Lecithin, Sunflower Lecithin), Low Fat Cocoa Powder, Palm Kernel Oil, Sunflower Oil, Milk Fat (Anhydrous), Salt, Skimmed Milk Powder, Acid (E330 Citric Acid). Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Luxury Irish Cream Truffles - 24 Chocolates        
Luxury Irish Cream Truffles - 24 Chocolates

Luxury Irish Cream Truffles - 24 Chocolates

Ingredients: Sugar, Cocoa Butter, Whole Milk Powder, Cocoa Mass, Water, Liquid Invert Sugar Syrup, Glucose Syrup, Alcohol, Palm Oil, Coconut Oil, Flavourings, Emulsifiers (Soya Lecithin, Sunflower Lecithin), Low Fat Cocoa Powder, Palm Kernel Oil, Sunflower Oil, Milk Fat (Anhydrous), Salt, Skimmed Milk Powder, Acidity Regulator: E330. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Milk Chocolate Coconut Ice        
Milk Chocolate Coconut Ice

Milk Chocolate Coconut Ice

Ingredients: Sugar, Glucose Syrup (Sulphites), Desiccated Coconut, Vegetable Oil (Palm Stearin Oil, Palm Kernel Oil), Flavourings, Colour: E122; Beetroot Red. Milk Chocolate: Sugar, Cocoa Butter, Whole Milk Powder, Cocoa Mass, Emulsifier: Soya Lecithin: E322; Natural Vanilla Flavouring. Contains Cocoa Solids: 34%Min.; Milk Solids: 20% Min. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✔ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✔ Contains Artificial Flavours ✔


          Milk Chocolate Covered Fudge        
Milk Chocolate Covered Fudge

Milk Chocolate Covered Fudge

Ingredients: White Sugar, Sweetened Condensed Milk, Glucose, Milk Chocolate (16%), Fondant (Sugar, Glucose, Water), Butter (4%), Hydrogenated Palm Kernel Oil, Vanilla Flavouring, Mono Sodium Glutamate, Salt. Milk Chocolate Contains Cocoa Solids: 36% Minimum; Milk Solids: 14% Minimum. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Milk Chocolate Covered Nougat        
Milk Chocolate Covered Nougat

Milk Chocolate Covered Nougat

Ingredients: Sugar, Glucose Syrup (Sulphites), Egg Albumen, Vegetable Oil (Palm Stearin Oil, Palm Kernel Oil), Flavourings & Colourings E122. Milk Chocolate: Sugar, Cocoa Liquor, Cocoa Butter, Fat Reduced Powder, Whole Milk Powder, Skimmed Milk Powder, Whey Powder (Milk), Cream powder (Milk), Milk Fat, Lactose, Emulsifier: Soya Lecithin E322. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✔ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✔ Contains Artificial Flavours ✔


          No Added Sugar Belgian Chocolates 125g - 10 Chocolates        
No Added Sugar Belgian Chocolates 125g - 10 Chocolates

No Added Sugar Belgian Chocolates 125g - 10 Chocolates

Ingredients: Sweeteners (Maltitol and Lactitol), Cocoa Butter, Cocoa Mass, Full Milk Powder, Hazelnuts, Vegetable Fats (Palm, Palm Kernel, Sunflower, Rapeseed, Cottonseed), Alimentary Fibre (Inulin), Skimmed Milk Powder, Whey Powder, Almonds, Skimmed Yoghurt Powder, Wheat Flour, Shredded Coconut, Emulsifier (Soya Lecithin), Pistachios, Rice Crisp (Rice Flour), Natural Flavourings, Fat Reduced Cocoa, Dehydrated Strawberries, Cointreau, Coffee, Natural Colourings (E100, E163). Dark Chocolate Contains: Cocoa Solids: 55% Minimum; Milk Chocolate Contains Cocoa Solids: 37% Minimum; Milk Solids: 23% Minimum; White Chocolate Contains Cocoa Solids: 32% Minimum; Milk Solids: 24% Minimum. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          No Added Sugar Belgian Seashells - 10 Chocolates        
No Added Sugar Belgian Seashells - 10 Chocolates

No Added Sugar Belgian Seashells - 10 Chocolates

Ingredients: Cocoa Butter, Full Milk Powder, Alimentary Fibres (Dextrin, Inulin, Oligofructose), Cocoa Mass, Whey Powder (Milk), Sweeteners (Erythritol, Steviol Glycosides), Hazelnuts (8.4%), Vegetable Fats (Palm, Palm Kernel), Skimmed Milk Powder, Emulsifier (Soya Lecithin), Natural Flavourings. Milk chocolate cocoa solids: 36% minimum, Milk solids: 30% minimum; White chocolate cocoa solids: 44% minimum, Milk solids: 40% minimum. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Pint Pot Sweets Jar        
Pint Pot Sweets Jar

Pint Pot Sweets Jar

Ingredients: Glucose Syrup, Sugar, Corn Starch, Gelatine, Lactic Acid, Flavourings, Gelling Agent (Pectins), Vegetable Oils: (Coconut, Palm Kernel), Glazing Agent: Carnauba Wax, Beeswax; Colours: E150a & E171. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✖ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Red Wine & Chocolate Gift Box        
Red Wine & Chocolate Gift Box

Red Wine & Chocolate Gift Box

Ingredients: Sugar, Hazelnuts, Cocoa Butter, Whole Milk Powder, Cocoa Mass, Glucose Syrup, Invert Sugar Syrup, Vegetable Oils and Fats: Palm, Palm Kernel, Coconut, Sunflower, Rapeseed; Skimmed Milk Powder, Dextrose, Milk Fat (Anhydrous), Low Fat Cocoa Powder, Water, Salt, Acidity Regulator: Citric Acid; Alcohol, Spirit Drink: Rum, Whiskey, Marc de Champagne, Amaretto; Emulsifier: E322 Soya Lecithin, E322 Sunflower Lecithin; Humectant: E420i Sorbitol; Egg, Flavourings, Colourings: Paprika Extract, E120 Cochineal, E160e (C.I. Food Orange); Preservative: E202 Potassium Sorbate. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✖ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Salted Caramel Collection        
Salted Caramel Collection

Salted Caramel Collection

Ingredients: Sugar, Cocoa Butter, Whole Milk Powder, Cocoa Mass, Butter (Milk), Milk Fat, Vegetable Fat, Glucose Syrup, Skimmed Milk Powder, Coconut Oil, Sweetened Condensed Milk, Evaporated Milk, Wheat Flour, Partially Inverted Sugar Syrup, Skimmed Milk, Hydrogenated Vegetable Fat (Coconut), Vegetable Oils (Sunflower, Palm Kernel, Rapeseed, Palm), Cream (Milk), Salt, Glucose-Fructose Syrup, Emulsifiers (Soya Lecithin, Sunflower Lecithin, E471), Stabilisers (Pectin, E422), Dextrose, Alcohol, Water, Natural Vanilla, Raising Agent (E503, E500), Flavourings. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          Signature Chocolate Box - 15 Chocolates        
Signature Chocolate Box - 15 Chocolates

Signature Chocolate Box - 15 Chocolates

Ingredients: Sugar, Full Cream Milk Powder, Cocoa Butter, Cocoa Mass, Sweetened Condensed Skimmed Milk, Hazelnuts, Almonds, Vegetable Fats (Palm, Sunflower, Rapeseed, Cottonseed), Butter (Milk), Maple Syrup, Glucose Syrup, Invert Sugar Syrup, Cherry Soaked in Liqueur 24% vol, Kirsch 60% vol, Marc de Champagne, Honey, Glucose-Fructose Syrup, Skimmed Milk Powder, Cream (Milk), Sugar Cane, Emulsifier (Soya Lecithin, Sunflower Lecithin), Low Fat Cocoa Powder, Vegetable Oil (Coconut, Palm Kernel, Sunflower), Lactose, Dextrose, Alcohol, Acidity Regulator: E330; Natural Flavouring, Natural Vanilla Flavouring, Anhydrous Milk Fat, Water, Salt, Pistachios, Colours (E100, E160a). Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Signature Chocolate Box - 24 Chocolates        
Signature Chocolate Box - 24 Chocolates

Signature Chocolate Box - 24 Chocolates

Ingredients: Sugar, Full Cream Milk Powder, Cocoa Butter, Cocoa Mass, Sweetened Condensed Skimmed Milk, Hazelnuts, Almonds, Vegetable Fats (Palm, Sunflower, Rapeseed, Cottonseed), Butter (Milk), Maple Syrup, Glucose Syrup, Invert Sugar Syrup, Cherry Soaked in Liqueur 24% vol, Kirsch 60% vol, Marc de Champagne, Honey, Glucose-Fructose Syrup, Skimmed Milk Powder, Cream (Milk), Sugar Cane, Emulsifiers (Soya Lecithin, Sunflower Lecithin), Low Fat Cocoa Powder, Vegetable Oils (Coconut, Palm Kernel, Sunflower), Lactose, Dextrose, Alcohol, Acidity Regulator: E330, Natural Flavouring, Natural Vanilla Flavouring, Anhydrous Milk Fat, Water, Salt, Pistachios, Colours (E100, E160a).   Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Signature Chocolate Box - 48 Chocolates        
Signature Chocolate Box - 48 Chocolates

Signature Chocolate Box - 48 Chocolates

Ingredients: Sugar, Full Cream Milk Powder, Cocoa Butter, Cocoa Mass, Whole Milk Powder, Sweetened Condensed Skimmed Milk, Evaporated Milk, Vegetable Fats (Palm, Coconut, Sunflower, Rapeseed, Cottonseed), Butter (Milk), Glucose Syrup, Invert Sugar Syrup, Milk Fat, Skimmed Milk Powder, Cream (Milk), Vegetable Oils (Coconut, Palm, Rapeseed, Palm Kernel, Sunflower), Cocoa, Liquid Sugar, Glucose-Fructose Syrup, Hydrogenated Vegetable Fat, Hazelnuts, Almonds, Cherry Soaked in Liqueur 24% vol, Marc de Champagne, Maple Syrup, Honey, Coconut, Raspberry 50% vol, Kirsch 60% vol, Batida de Coco 16% vol, Strawberry, Cranberry (Sugar, Cranberries, Citric Acid, Natural Orange Flavour with Other Natural Flavours, Elderberry Juice Concentrate, Sunflower Oil), Speculoos Herbs (Cinnamon, Nutmeg, Cloves, Pimento, Ginger, Coriander, Mace, Cardamom), Rum, Orange Liqueur, Concentrated Lemon Juice, Chilli Flavour, Lime Oil, Sugar Cane, Emulsifiers (Soya Lecithin, Sunflower Lecithin, E471), Low Fat Cocoa Powder, Lactose, Fructose, Dextrose, Sorbitol, Alcohol, Wheat Starch, Modified Starch, Natural Flavouring, Natural Vanilla Flavouring, Strawberry Flavour, Natural Orange Oil, Anhydrous Milk Fat, Butter Oil (Milk), Water, Salt, Pistachios, Acidity Regulator: E330, Glazing Agent: E901, Preservative: E202, Antioxidant: E223, Colours (E100, E160a, E120).   Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✖ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✔ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Signature Chocolate Box - 96 Chocolates        

Ingredients: Sugar, Full Cream Milk Powder, Cocoa Butter, Cocoa Mass, Whole Milk Powder, Sweetened Condensed Skimmed Milk, Evaporated Milk, Vegetable Fats (Palm, Coconut, Sunflower, Rapeseed, Cottonseed), Butter (Milk), Glucose Syrup, Invert Sugar Syrup, Milk Fat, Skimmed Milk Powder, Cream (Milk), Vegetable Oils (Coconut, Palm, Rapeseed, Palm Kernel, Sunflower), Cocoa, Liquid Sugar, Glucose-Fructose Syrup, Hydrogenated Vegetable Fat, Hazelnuts, Almonds, Cherry Soaked in Liqueur 24% vol, Marc de Champagne, Maple Syrup, Honey, Coconut, Raspberry 50% vol, Kirsch 60% vol, Batida de Coco 16% vol, Strawberry, Cranberry (Sugar, Cranberries, Citric Acid, Natural Orange Flavour with Other Natural Flavours, Elderberry Juice Concentrate, Sunflower Oil), Speculoos Herbs (Cinnamon, Nutmeg, Cloves, Pimento, Ginger, Coriander, Mace, Cardamom), Rum, Orange Liqueur, Concentrated Lemon Juice, Chilli Flavour, Lime Oil, Sugar Cane, Emulsifiers (Soya Lecithin, Sunflower Lecithin, E471), Low Fat Cocoa Powder, Lactose, Fructose, Dextrose, Sorbitol, Alcohol, Wheat Starch, Modified Starch, Natural Flavouring, Natural Vanilla Flavouring, Strawberry Flavour, Natural Orange Oil, Anhydrous Milk Fat, Butter Oil (Milk), Water, Salt, Pistachios, Acidity Regulator: E330, Glazing Agent: E901, Preservative: E202, Antioxidant: E223, Colours (E100, E160a, E120). Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✖ Contains Gluten or Possible Traces ✔ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✔ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          Small Blue Patterned Trinket Box        

Lindt Lindor Truffle Ingredients: Sugar, Vegetable Fats (Coconut, Palm Kernel), Cocoa Butter, Cocoa Mass, Whole Milk Powder, Skimmed Milk Powder, Lactose, Anhydrous Milk Fat, Emulsifier (Soya Lecithin), Barley Malt Extract, Flavourings, Vanilla. Milk Chocolate contains Cocoa Solids: 31% minimum, Milk Solids: 20% minimum. Contains Milk, Soya and Gluten. May Contain Nuts.


          Small Kitten Tin        

Lindt Lindor Truffles Ingredients: Sugar, Vegetable Fats (Coconut, Palm Kernel), Cocoa Butter, Cocoa Mass, Whole Milk Powder, Skimmed Milk Powder, Lactose, Anhydrous Milk Fat, Emulsifier (Soya Lecithin), Barley Malt Extract, Flavourings, Vanilla. Milk Chocolate contains Cocoa Solids: 31% minimum, Milk Solids: 20% minimum. Contains Milk, Soya and Gluten. May Contain Nuts.


          White Chocolate Covered Fudge        

Ingredients: White Sugar, Sweetened Condensed Milk, Glucose, White Chocolate (16%), Fondant (Sugar, Glucose, Water), Butter (4%), Hydrogenated Palm Kernel Oil, Vanilla Flavouring, Mono Sodium Glutamate, Salt. White Chocolate Contains: Cocoa Solids: 28% Minimum; Milk Solids: 25% Minimum. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✔ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✖ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✖ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✖ Contains Artificial Colours ✖ Contains Artificial Flavours ✖


          White Wine & Chocolate Gift Box        

Ingredients: Sugar, Hazelnuts, Cocoa Butter, Whole Milk Powder, Cocoa Mass, Glucose Syrup, Invert Sugar Syrup, Vegetable Oils and Fats: Palm, Palm Kernel, Coconut, Sunflower, Rapeseed; Skimmed Milk Powder, Dextrose, Milk Fat (Anhydrous), Low Fat Cocoa Powder, Water, Salt, Acidity Regulator: Citric Acid; Alcohol, Spirit Drink: Rum, Whiskey, Marc de Champagne, Amaretto; Emulsifier: E322 Soya Lecithin, E322 Sunflower Lecithin; Humectant: E420i Sorbitol; Egg, Flavourings, Colourings: Paprika Extract, E120 Cochineal, E160e C.I. Food Orange; Preservative: E202 Potassium Sorbate. Allergen & Dietary Advice: Suitable for Vegans ✖ Suitable for Vegetarians ✖ Contains Gluten or Possible Traces ✖ Contains Milk, Milk Derivatives or Possible Traces ✔ Contains Eggs, Egg Derivatives or Possible Traces ✔ Contains Peanuts or Possible Traces ✔ Contains Other Nuts or Possible Traces ✔ Contains Soya or Possible Traces ✔ Contains Sulphur Dioxide or Sulphites ✖ Contains Mustard ✖ Contains Celery ✖ Contains Alcohol ✔ Contains Artificial Colours ✖ Contains Artificial Flavours ✔


          UNIX/Linux Advanced File Permissions - SUID, SGID and Sticky Bit        

After you have worked with Linux for a while, you will probably discover
that there is much more to file permissions than just the "rwx" bits.
When you look around in your file system you will see "s" and "t" bits as well:

$ ls -ld /tmp
drwxrwxrwt 29 root root 36864 Mar 21 19:49 /tmp

$ which passwd
/usr/bin/passwd

$ ls -l /usr/bin/passwd
-rwsr-xr-x 1 root root 22984 Jan 6 2007 /usr/bin/passwd

What is this "s" and "t" bit? The vector of permission bits is really
4 * 3 bits long. Yes there are 12 permission bits,not just 9.The first
three bits are special and are frequently zero. And you almost always
learn about the trailing 9 bits first.Some people stop there and never
learn those first three bits.

The fourth permission digit (the leading one in an octal mode such as
4755) is used only when a special mode of a file needs to be set. It has
the value 4 for SUID, 2 for SGID and 1 for the sticky bit. The other
three digits have their usual significance.

Here we will discuss the 3 special attributes beyond the common
read/write/execute:

1. Set-User-Id (SUID)
2. Set-Group-Id (SGID)
3. Sticky Bit
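
These three attributes combine additively in the leading octal digit (4, 2
and 1 respectively). As a quick reference, a few illustrative invocations
(the file and directory names are placeholders):

$ chmod 4755 somefile   # SUID:   -rwsr-xr-x
$ chmod 2755 somefile   # SGID:   -rwxr-sr-x
$ chmod 1777 somedir    # sticky: drwxrwxrwt (like /tmp)
$ chmod 6755 somefile   # SUID + SGID (4 + 2): -rwsr-sr-x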


Set-User-Id (SUID): Power for a Moment:


By default, when a user executes a file, the process which results from
this execution has the same permissions as the user. In fact, the
process inherits the user's default group and user identification.

If you set the SUID attribute on an executable file, the process
resulting from its execution doesn't use the executing user's
identification but the user identification of the file owner.

The SUID mechanism, invented by Dennis Ritchie, is a potential security
hazard. It lets a user acquire hidden powers by running such a file
owned by root.

$ ls -l /etc/passwd /etc/shadow
-rw-r--r-- 1 root root 2232 Mar 15 00:26 /etc/passwd
-r-------- 1 root root 1447 Mar 19 19:01 /etc/shadow

The listing shows that passwd is readable by all, but shadow is
unreadable by group and others. A user running the program belongs to
one of these two categories (probably "others"), so access fails in the
read test on shadow. Suppose a normal user wants to change his password;
how can he do that? He can do that by running /usr/bin/passwd. Many
UNIX/Linux programs have a special permission mode that lets users
update sensitive system files, like /etc/shadow, something they can't
do directly with an editor. This is true of the passwd program.

$ ls -l /usr/bin/passwd
-rwsr-xr-x 1 root root 22984 Jan 6 2007 /usr/bin/passwd

The s letter in the user category of the permission field represents a
special mode known as set-user-id (SUID). This mode lets a process have
the privileges of the owner of the file for the duration of the program.
Thus when a non-privileged user executes passwd, the effective UID of
the process is not the user's but root's, the owner of the program.
This SUID privilege is then used by passwd to edit /etc/shadow.

What is an effective user-id?

Every process really has two user IDs: the effective user ID and the
real user ID. (Of course, there's also an effective group ID and a real
group ID. Just about everything that's true about user IDs is also true
about group IDs.) Most of the time, the kernel checks only the effective
user ID. For example, if a process tries to open a file, the kernel
checks the effective user ID when deciding whether to let the process
access the file.

Save the following script under the name reids.pl and make it
executable (chmod 755 reids.pl).

#!/usr/bin/perl
# print real UID
print "Real UID: $<\n";
# print real GID
print "Real GID: $(\n";
# print effective UID
print "Effective UID: $>\n";
# print effective GID
print "Effective GID: $)\n";

Check the file permissions:

$ ls -l reids.pl
-rwxr-xr-x 1 venu venu 203 Mar 24 10:40 reids.pl

Note: For security reasons the s-bit works only on binaries (compiled
code) and not on scripts (perl scripts are an exception). Scripts, i.e.
programs that cannot be executed by the kernel directly but need an
interpreter such as the Bourne shell or Java, can have their setuid bit
set, but it doesn't have any effect. There are some platforms that
honor the s bits even on scripts (some System V variants, for example),
but most systems don't, because it has proven such a security headache -
most interpreters simply aren't written with much security in mind.
Setting the SUID bit on a shell script is useless, which is why I am
using a perl script here.

When you run the script you will see that the process that runs it
gets your user-ID and your group-ID:

$ ./reids.pl
Real UID: 500
Real GID: 500 500
Effective UID: 500
Effective GID: 500 500

Note: If you get an error like this:
Can't do setuid (cannot exec sperl)

In Debian install perl-suid using the following command:
apt-get install perl-suid

In CentOS install perl-suidperl using the following command:
yum install perl-suidperl

Now change ownership to another user (do it as an administrator).

# chown king /home/venu/reids.pl
# ls -l /home/venu/reids.pl
-rwxr-xr-x 1 king venu 203 Mar 24 10:40 /home/venu/reids.pl

Now run the script again.

$ ./reids.pl
Real UID: 500
Real GID: 500 500
Effective UID: 500
Effective GID: 500 500

As you can observe, the output of the program depends only on the user
that runs it and not on the one who owns the file.

How to assign SUID permission:

The SUID for any file can be set (mostly by the superuser) with a
special syntax of the chmod command. This syntax uses the character s
as the permission. Now add the SUID permission to the script reids.pl:

# chmod u+s /home/venu/reids.pl (Do it from root account)

Now return from superuser mode to the usual non-privileged mode.

$ ls -l reids.pl
-rwsr-xr-x 1 king venu 203 Mar 24 10:40 reids.pl

To assign SUID in an absolute manner, simply prefix 4 to whatever
octal string you would otherwise use (like 4755 instead of 755).

The file reids.pl is owned by king and has the s-bit set where normally
the x is for the owner of the file. This causes the file to be executed
under the user-ID of the user that owns the file rather than the user
that executes it. If venu runs the program then this looks as follows:

$ perl reids.pl
Real UID: 500
Real GID: 500 500
Effective UID: 503
Effective GID: 500 500

The effective user id of the process is 503; this is not venu's, but
king's, the owner of the program. As you can see this is a very
powerful feature, especially if root owns a file with the s-bit set:
any user can then do things that normally only root can do.

Caution: When you write a SUID program you must make sure that it can
only be used for the purpose that you intended. As administrator, you
must keep track of all SUID programs owned by root that a user may try
to create or copy. The find command easily locates them:

# find /home -perm -4000 -print | mail root

The extra octal bit (4) signifies the SUID mode; the "-" before 4000
tells find to match files that have at least those permission bits set,
whatever the other permissions are.

Set-Group-Id (SGID):


The set-group-id (SGID) is similar to SUID except that a program with
SGID set allows the user to have the same power as the group which
owns the program. The SGID bit is 2, and some typical examples could be
chmod g+s reids.pl or chmod 2755 reids.pl.
You can remove SGID bit using following commands:

$ chmod g-s reids.pl
$ chmod 755 reids.pl (Absolute manner)


It is really useful in case you have a real multi-user setup where
users access each other's files. As a single home user I haven't really
found a lot of use for SGID, but the basic concept is the same as for
SUID: SGID also grants privileges and access rights to the process
running the command, but instead of receiving those of the file's owner
it receives those of the file's group. In other words, the process
group owner will be set to the file's group.

Let me explain with an example. I have created two user accounts, king
and venu, with the same home directory, /home/project. king belongs to
the king and development groups; venu belongs to the venu and
development groups.

# groups king venu
king : king development
venu : venu development

venu's default group is venu and king's default group is king.

Log in as king, create the reids.pl file again, and make it executable
(using chmod 755 reids.pl).

$ id
uid=503(king) gid=503(king) groups=501(development),503(king)
$ ls -l reids.pl
-rwxr-xr-x 1 king development 203 Mar 25 19:00 reids.pl

Now log in as venu and run the program:

$ id
uid=501(venu) gid=504(venu) groups=501(development),504(venu)
$ perl reids.pl
Real UID: 501
Real GID: 504 504 501
Effective UID: 501
Effective GID: 504 504 501

The effective GID of the process is venu's, not king's, the owner of
the program.

Now log in as king and assign the SGID bit to the reids.pl program:

$ chmod 2755 reids.pl; ls -l reids.pl
-rwxr-sr-x 1 king development 203 Mar 25 19:00 reids.pl

Now log in as venu and run the reids.pl program:

$ perl reids.pl
Real UID: 501
Real GID: 504 504 501
Effective UID: 501
Effective GID: 501 504 501

The real GID and effective GID are now different; the effective GID
(501) is that of development, the group that owns the program.

Set SGID on a directory:

When SGID is set on a directory it has a special meaning. Files created
in a directory with SGID set will inherit the same group ownership as
the directory itself, not the group of the user who created the file.
If SGID is not set, the file's group ownership corresponds to the
user's default group.

In order to set the SGID on a directory or to remove it, use the
following commands:

$ chmod g+s directory or $ chmod 2755 directory
$ chmod g-s directory or $ chmod 755 directory

As I mentioned earlier, venu's and king's home directory is the same:
/home/project. I changed the group ownership of the /home/project
directory to development.

# ls -ld /home/project/
drwxrwxr-x 16 root development 4096 Mar 26 00:22 /home/project/

Now log in as king and create a temp file.

$ whoami
king
$ pwd
/home/project/
$ touch temp; ls -l temp
-rw-r--r-- 1 king king 0 Mar 26 12:34 temp

You can see from the ls output that the group owner for project is
development, and that the SGID bit has not been set on the directory
yet. When king creates a file in project, the group for the file is
king (king's primary gid).

Set the SGID bit on the project directory. To do that, log in as
administrator and set the SGID bit using the following command:

# chmod g+s /home/project/
# ls -ld /home/project/
drwxrwsr-x 15 root development 4096 Mar 26 12:34 /home/project/

From the ls output above, you know the SGID bit is set because of the
s in the third position of the group permission set, which replaces the
x in the group permissions.

Now log in as king and create a temp2 file.

$ whoami
king
$ touch temp2; ls -l temp2
-rw-r--r-- 1 king development 0 Mar 26 13:49 temp2

Notice the group ownership of the temp2 file: it inherits its group
from the parent directory.

Enabling SGID on a directory is extremely useful when you have a
group of users with different primary groups working on the same set
of files.

For system security reasons it is not a good idea to set programs'
set-user or set-group ID bits any more than necessary, since this can
grant an unauthorized user privileges in sensitive system areas. If the
program has a flaw that allows the user to break out of its intended
use, then the system can be compromised.

Sticky bit:


The sticky bit (also called the saved text bit) is the last permission
bit remaining to be discussed. It applies to both regular files and
directories. When applied to a regular file, it ensures that the text
image of a program with the bit set is permanently kept in the swap
area so that it can be reloaded quickly when the program's turn to use
the CPU arrives. Previously, it made sense to have this bit set for
programs like vi and emacs. Today, machines with ultra-fast disk drives
and lots of cheap memory don't need this bit for ordinary files, so on
regular files it is effectively useless.

However, the sticky bit becomes a useful security feature when used
with a directory. The UNIX/Linux system allows users to create files
in /tmp, but no user can delete files owned by someone else. That's
possible because the sticky bit is set on the /tmp directory.

The /tmp directory is typically world-writable and looks like this
in a listing:

# ls -ld /tmp
drwxrwxrwt 32 root root 36864 Mar 27 12:38 /tmp

Everyone can read, write and access the directory. The t indicates that
only the user that created a file in this directory (and root and the
owner of the directory, of course) can delete that file.

In order to set or to remove the sticky bit, use the following
commands:

$ chmod +t directory or $ chmod 1754 directory
$ chmod -t directory or $ chmod 754 directory

Note: 754 permissions for a directory are powerful enough to guard
your directories from intruders with malicious intentions; that's why
I used 754 as the default. If you want, you can change it.

Example:

I logged in as king and created a temp file.

$ whoami
king
$ pwd
/home/project/
$ touch temp; ls -l
-rw-r--r-- 1 king king 0 Mar 27 13:44 temp

Now log in as venu and try to delete the temp file.

$ whoami
venu
$ rm temp
rm: remove write-protected regular empty file `temp'? Y
$ ls temp
ls: temp: No such file or directory

So what happened? venu deleted a file owned by king.

Assign the sticky bit to the project directory, as the owner of the
directory or as administrator:

# chmod +t /home/project
# ls -ld /home/project/
drwxrwxr-t 15 root development 4096 Mar 27 13:46 /home/project/

From the ls output above, you know the sticky bit is set because of
the t in the third position of the other permission set, which replaces
the x in the other permissions.

Now repeat the same steps again; this time you get the following message:

$ whoami
venu
$ ls -l temp
-rw-r--r-- 1 king king 0 Mar 27 17:36 temp
$ rm temp
rm: remove write-protected regular empty file `temp'? y
rm: cannot remove `temp': Operation not permitted


Observation: Log in as a normal user and create a file.
[venu@localhost ~]$ touch sample
[venu@localhost ~]$ ls -l sample
-rw-rw-r-- 1 venu venu 0 Dec 21 03:41 sample

Now change permissions to 644

[venu@localhost ~]$ chmod 644 sample
[venu@localhost ~]$ ls -l sample
-rw-r--r-- 1 venu venu 0 Dec 21 03:41 sample

Now assign SUID permission.

[venu@localhost ~]$ chmod u+s sample
[venu@localhost ~]$ ls -l sample
-rwSr--r-- 1 venu venu 0 Dec 21 03:41 sample

After setting SUID, if you see a capital 'S' it means that the file has
no execute permission for that user category (here, the owner).

Now remove the SUID permission and change the permissions to 744, then
assign the SUID permission again. You should see a lowercase 's' in the
execute permission position.

[venu@localhost ~]$ chmod u-s sample
[venu@localhost ~]$ chmod 744 sample
[venu@localhost ~]$ chmod u+s sample
[venu@localhost ~]$ ls -l sample
-rwsr--r-- 1 venu venu 0 Dec 21 03:41 sample

The same applies to SGID and the sticky bit.
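
To sketch the same capital-letter behaviour for SGID and the sticky bit
(the file and directory names are just examples):

$ chmod 644 sample; chmod g+s sample; ls -l sample
-rw-r-Sr-- 1 venu venu 0 Dec 21 03:41 sample

$ mkdir demo; chmod 754 demo; chmod +t demo; ls -ld demo
drwxr-xr-T 2 venu venu 4096 Dec 21 03:45 demo

A capital 'S' in the group set means SGID is set without group execute;
a capital 'T' in the other set means the sticky bit is set without
execute for others.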



          kdump-tools enhancements to use smaller initrd.img        
While testing the upcoming release of Ubuntu (15.10 Wily Werewolf), I ran into a bug that renders the kernel crash dump mechanism unusable by default: LP: #1496317: kexec fails with OOM killer with the current crashkernel=128 value. The … Continue reading
          Capturing kernel crash dumps with Juju        
A little before the summer vacation, I decided that it was a good time to get acquainted with writing Juju charms. Since I am heavily involved with the kernel crash dump tools, I thought that it would be a good … Continue reading
          remote kernel crash dump : More testing needed        
A couple of weeks ago I announced that I was working on a new remote functionality for kdump-tools, the kernel crash dump tool used on Debian and Ubuntu. I am now done with the development of the new functionality, so … Continue reading
          remote kernel crash dump for Debian and Ubuntu        
A few years ago I started to participate in the packaging of makedumpfile and kdump-tools for Debian and Ubuntu. I am currently applying for the formal status of Debian Maintainer to continue that task. For a while now, I … Continue reading
          Tracking down a kernel bug with git bisect        

After a recent upgrade of my Fedora 20 system to kernel 3.15.mumble, I started running into a problem (BZ 1121345) with my Docker containers. Operations such as su or runuser would fail with the singularly unhelpful System error message:

$ docker run -ti fedora /bin/bash
bash-4.2# su …

          A Python interface to signalfd() using FFI        

I just recently learned about the signalfd(2) system call, which was introduced to the Linux kernel back in 2007:

signalfd() creates a file descriptor that can be used to accept signals targeted at the caller. This provides an alternative to the use of a signal handler or sigwaitinfo(2), and has the advantage that the file descriptor may be monitored by select(2), poll(2), and epoll(7).

The traditional asynchronous delivery mechanism can be tricky to get right, whereas this provides a convenient fd interface that integrates nicely with your existing event-based code.

I was interested in using signalfd() in some Python code, but Python does not expose this system call through any of the standard libraries. There are a variety of ways one could add support, including:

  • Writing a Python module in C
  • Using the ctypes module (which I played with a few years ago)

However, I decided to use this as an excuse to learn about the cffi module. You can find the complete code in my python-signalfd repository and an explanation of the process below.


          Pushing a Git repository to Subversion        

I recently set up a git repository server (using gitosis and gitweb). Among the required features of the system was the ability to publish the git repository to a read-only Subversion repository. This sounds simple in principle but in practice proved to be a bit tricky.

Git makes an excellent …


          A Docker ‘Hello World' With Mono        

Docker is a lightweight virtualization technology for Linux that promises to revolutionize the deployment and management of distributed applications. Rather than requiring a complete operating system, like a traditional virtual machine, Docker is built on top of Linux containers, a feature of the Linux kernel, that allows light-weight Docker containers to share a common kernel while isolating applications and their dependencies.

There’s a very good Docker SlideShare presentation here that explains the philosophy behind Docker using the analogy of standardized shipping containers. Interesting that the standard shipping container has done more to create our global economy than all the free-trade treaties and international agreements put together.

A Docker image is built from a script, called a ‘Dockerfile’. Each Dockerfile starts by declaring a parent image. This is very cool, because it means that you can build up your infrastructure from a layer of images, starting with general, platform images and then layering successively more application specific images on top. I’m going to demonstrate this by first building an image that provides a Mono development environment, and then creating a simple ‘Hello World’ console application image that runs on top of it.

Because the Dockerfiles are simple text files, you can keep them under source control and version your environment and dependencies alongside the actual source code of your software. This is a game changer for the deployment and management of distributed systems. Imagine developing an upgrade to your software that includes new versions of its dependencies, including pieces that we’ve traditionally considered the realm of the environment, and not something that you would normally put in your source repository, like the Mono version that the software runs on for example. You can script all these changes in your Dockerfile, test the new container on your local machine, then simply move the image to test and then production. The possibilities for vastly simplified deployment workflows are obvious.

Docker brings concerns that were previously the responsibility of an organization’s operations department and makes them a first class part of the software development lifecycle. Now your infrastructure can be maintained as source code, built as part of your CI cycle and continuously deployed, just like the software that runs inside it.

Docker also provides docker index, an online repository of docker images. Anyone can create an image and add it to the index, and there are already images for almost any piece of infrastructure you can imagine. Say you want to use RabbitMQ: all you have to do is grab a handy RabbitMQ image such as https://index.docker.io/u/tutum/rabbitmq/ and run it like this:

docker run -d -p 5672:5672 -p 55672:55672 tutum/rabbitmq

The -p flag maps ports between the container and the host.

Let’s look at an example. I’m going to show you how to create a docker image for the Mono development environment and have it built and hosted on the docker index. Then I’m going to build a local docker image for a simple ‘hello world’ console application that I can run on my Ubuntu box.

First we need to create a Docker file for our Mono environment. I’m going to use the Mono debian packages from directhex. These are maintained by the official Debian/Ubuntu Mono team and are the recommended way of installing the latest Mono versions on Ubuntu.

Here’s the Dockerfile:

#DOCKER-VERSION 0.9.1
#
#VERSION 0.1
#
# monoxide mono-devel package on Ubuntu 13.10

FROM ubuntu:13.10
MAINTAINER Mike Hadlow <mike@suteki.co.uk>

RUN sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q software-properties-common
RUN sudo add-apt-repository ppa:directhex/monoxide -y
RUN sudo apt-get update
RUN sudo DEBIAN_FRONTEND=noninteractive apt-get install -y -q mono-devel

Notice the first line (after the comments) that reads, ‘FROM ubuntu:13.10’. This specifies the parent image for this Dockerfile. This is the official docker Ubuntu image from the index. When I build this Dockerfile, that image will be automatically downloaded and used as the starting point for my image.

But I don’t want to build this image locally. Docker provides a build server linked to the docker index. All you have to do is create a public GitHub repository containing your dockerfile, then link the repository to your profile on docker index. You can read the documentation for the details.

The GitHub repository for my Mono image is at https://github.com/mikehadlow/ubuntu-monoxide-mono-devel. Notice how the Docker file is in the root of the repository. That’s the default location, but you can have multiple files in sub-directories if you want to support many images from a single repository.

Now any time I push a change of my Dockerfile to GitHub, the docker build system will automatically build the image and update the docker index. You can see image listed here: https://index.docker.io/u/mikehadlow/ubuntu-monoxide-mono-devel/

I can now grab my image and run it interactively like this:

$ sudo docker pull mikehadlow/ubuntu-monoxide-mono-devel
Pulling repository mikehadlow/ubuntu-monoxide-mono-devel
f259e029fcdd: Download complete
511136ea3c5a: Download complete
1c7f181e78b9: Download complete
9f676bd305a4: Download complete
ce647670fde1: Download complete
d6c54574173f: Download complete
6bcad8583de3: Download complete
e82d34a742ff: Download complete

$ sudo docker run -i mikehadlow/ubuntu-monoxide-mono-devel /bin/bash
mono --version
Mono JIT compiler version 3.2.8 (Debian 3.2.8+dfsg-1~pre1)
Copyright (C) 2002-2014 Novell, Inc, Xamarin Inc and Contributors. www.mono-project.com
TLS: __thread
SIGSEGV: altstack
Notifications: epoll
Architecture: amd64
Disabled: none
Misc: softdebug
LLVM: supported, not enabled.
GC: sgen
exit

Next let’s create a new local Dockerfile that compiles a simple ‘hello world’ program, and then runs it when we run the image. You can follow along with these steps. All you need is a Ubuntu machine with Docker installed.

First here’s our ‘hello world’, save this code in a file named hello.cs:

using System;

namespace Mike.MonoTest
{
    public class Program
    {
        public static void Main()
        {
            Console.WriteLine("Hello World");
        }
    }
}

Next we’ll create our Dockerfile. Copy this code into a file called ‘Dockerfile’:

#DOCKER-VERSION 0.9.1

FROM mikehadlow/ubuntu-monoxide-mono-devel

ADD . /src

RUN mcs /src/hello.cs
CMD ["mono", "/src/hello.exe"]

Once again, notice the ‘FROM’ line. This time we’re telling Docker to start with our mono image. The next line, ‘ADD . /src’, tells Docker to copy the contents of the current directory (the one containing our Dockerfile) into a root directory named ‘src’ in the container. Now our hello.cs file is at /src/hello.cs in the container, so we can compile it with the mono C# compiler, mcs, which is the line ‘RUN mcs /src/hello.cs’. Now we will have the executable, hello.exe, in the src directory. The line ‘CMD ["mono", "/src/hello.exe"]’ tells Docker what we want to happen when the container is run: just execute our hello.exe program.

As an aside, this exercise highlights some questions around what best practice should be with Docker. We could have done this in several different ways. Should we build our software independently of the Docker build in some CI environment, or does it make sense to do it this way, with the Docker build as a step in our CI process? Do we want to rebuild our container for every commit to our software, or do we want the running container to pull the latest from our build output? Initially I’m quite attracted to the idea of building the image as part of the CI but I expect that we’ll have to wait a while for best practice to evolve.

Anyway, for now let’s manually build our image:

$ sudo docker build -t hello .
Uploading context 1.684 MB
Uploading context
Step 0 : FROM mikehadlow/ubuntu-monoxide-mono-devel
---> f259e029fcdd
Step 1 : ADD . /src
---> 6075dee41003
Step 2 : RUN mcs /src/hello.cs
---> Running in 60a3582ab6a3
---> 0e102c1e4f26
Step 3 : CMD ["mono", "/src/hello.exe"]
---> Running in 3f75e540219a
---> 1150949428b2
Successfully built 1150949428b2
Removing intermediate container 88d2d28f12ab
Removing intermediate container 60a3582ab6a3
Removing intermediate container 3f75e540219a

You can see Docker executing each build step in turn and storing the intermediate result until the final image is created. Because we used the tag (-t) option and named our image ‘hello’, we can see it when we list all the docker images:

$ sudo docker images
REPOSITORY                              TAG     IMAGE ID      CREATED         VIRTUAL SIZE
hello                                   latest  1150949428b2  10 seconds ago  396.4 MB
mikehadlow/ubuntu-monoxide-mono-devel   latest  f259e029fcdd  24 hours ago    394.7 MB
ubuntu                                  13.10   9f676bd305a4  8 weeks ago     178 MB
ubuntu                                  saucy   9f676bd305a4  8 weeks ago     178 MB
...

Now let’s run our image. The first time we do this Docker will create a container and run it. Each subsequent run will reuse that container:

$ sudo docker run hello
Hello World
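
As an aside, each invocation of docker run leaves a stopped container behind; you can list them (and spot candidates for cleanup) with:

$ sudo docker ps -a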

And that’s it.

Imagine that instead of our little hello.exe, this image contained our web application, or maybe a service in some distributed software. In order to deploy it, we’d simply ask Docker to run it on any server we like; development, test, production, or on many servers in a web farm. This is an incredibly powerful way of doing consistent repeatable deployments.

To reiterate, I think Docker is a game changer for large server side software. It’s one of the most exciting developments to have emerged this year and definitely worth your time to check out.


          Acustica Audio Nebula2 and Nebula3 promotional prices        
Acustica Audio is offering Nebula2 and Nebula3 for a special price. Nebula is a VST plugin based on Volterra kernel series. It emulates different types of vintage gear: equalisers, filters, microphones, preamps, compressors, reverb and generic time-variant processors (chorus, flangers, phasers). Nebula2 + over 5 Gigabytes of libraries is 20 euro, […]
          today's leftovers        
  • Linux Weather Forecast

    This page is an attempt to track ongoing developments in the Linux development community that have a good chance of appearing in a mainline kernel and/or major distributions sometime in the near future. Your "chief meteorologist" is Jonathan Corbet, Executive Editor at LWN.net. If you have suggestions on improving the forecast (and particularly if you have a project or patchset that you think should be tracked), please add your comments below.

  • Linux guru Linus Torvalds is reviewing gadgets on Google+

    Now it appears the godfather of Linux has started to put all that bile to good use by reviewing products on Google+.

  • Learning to love Ansible

    I’ve been convinced about the merits of configuration management for machines for a while now; I remember conversations about producing an appropriate set of recipes to reproduce our haphazard development environment reliably over 4 years ago. That never really got dealt with before I left, and as managing systems hasn’t been part of my day job since then I never got around to doing more than working my way through the Puppet Learning VM. I do, however, continue to run a number of different Linux machines - a few VMs, a hosted dedicated server and a few physical machines at home and my parents’. In particular I have a VM which handles my parents’ email, and I thought that was a good candidate for trying to properly manage. It’s backed up, but it would be nice to be able to redeploy that setup easily if I wanted to move provider, or do hosting for other domains in their own VMs.

  • GSoC: Improvements in kiskadee architecture

    Today I have released kiskadee 0.2.2. This minor release brings some architecture improvements, fixes some bugs in the plugins and improves the log message format. Initially, let's take a look at the kiskadee architecture implemented in the 0.2 release.

  • How UndoDB works

    In the previous post I described what UndoDB is, now I will describe how the technology works.

    The naïve approach to record the execution of a program is to record everything that happens, that is the effects of every single machine instruction. This is what gdb does to offer reversible debugging.

  • Wild West RPG West of Loathing Launches for PC/Mac/Linux on August 10th

    Today, developer Asymmetric announced that its comedy, wild west RPG, West of Loathing, is poised to launch for PC, Mac, and Linux on August 10th.

  • Canonical asks users' help in deciding Ubuntu Linux desktop apps

    Canonical Ubuntu Linux has long been one of the most popular Linux desktop distributions. Now, its leadership is looking to its users for help to decide the default desktop applications in the next long-term support version of the operating system: Ubuntu 18.04.

    This release, scheduled for April 2018, follows October's Ubuntu 17.10, Artful Aardvark. Ubuntu 18.04 will already include several major changes. The biggest of these is Ubuntu is abandoning its Unity 8 interface to go back to the GNOME 3.x desktop.

  • Enhanced Open Source Framework Available for Parallel Programming on Embedded Multicore Devices
  • Studiolada used all wood materials to create this affordable open-source home anyone can build

    Using wood panels as the principal building material reduced the project’s overall cost and footprint because the wooden beams and wall panels were cut and varnished in a nearby workshop. Prefabricated concrete was used to embed the support beams, which were then clad in wooden panels. In fact, wood covers just about everything in the home, from the walls and flooring to the ceiling and partitions. Sustainable materials such as cellulose wadding and wood fibers were even used to insulate the home.


          Using an OpenVPN connection to play games while abroad with Steam's In-Home Streaming        

Introduction

Steam has a great (albeit a little glitchy) feature called In-Home Streaming that allows you to stream games from running Steam clients on your local network, effectively turning your gaming PC into a little render farm and allowing you to play from low-power devices like a laptop.

With the help of OpenVPN, it's possible to enable playback of games from your home PC seamlessly while away from your home network too, provided you have a decent Internet connection. This tutorial will demonstrate how to set up an OpenVPN server on Fedora 23 with a bridged network connection to let you VPN into your home network and stream Steam games from PCs on your LAN.

To make this work, we need to use OpenVPN in bridged mode with a tap network device. Bridging the ethernet and tap interfaces will allow VPN clients to receive an IP address on the LAN's subnet.

The default (and simpler) tun devices are not bridged, and function on a separate subnet - something which will break in-home streaming. We need to be on the same LAN so that the UDP broadcast packets sent by Steam for auto-discovery will be received by the VPN clients.

Creating a network bridge

Let's start by setting up the network bridge with NetworkManager and enslaving the ethernet interface. Check the name of your active network interface by running nmcli d, and replace the value of ETH_IFACE with that name below:

ETH_IFACE=enp3s0
nmcli con add type bridge ifname br0
nmcli c modify bridge-br0 bridge.stp no
nmcli con add type bridge-slave ifname $ETH_IFACE master bridge-br0
nmcli c up "bridge-slave-${ETH_IFACE}"
nmcli c up bridge-br0

Installing the OpenVPN server

Next, in order to run an OpenVPN server, one needs to set up a certificate authority (CA) that signs client certificates and authorizes them for login. In our case, we'll be using password authentication (for convenience) -- but OpenVPN still wants a CA set up and the server's certificate signed. Let's set up the CA for the OpenVPN server:

dnf install easy-rsa
cp -a /usr/share/easy-rsa/3 /root/openvpn-bridged-rsa
cd /root/openvpn-bridged-rsa
./easyrsa init-pki
./easyrsa build-ca
# Enter the CA password, then accept the defaults

Now we need to create and sign the certificate for the server (set the value of SERVER_ALIAS to an alias of your choice):

SERVER_ALIAS=homelab
./easyrsa gen-dh
./easyrsa gen-req $SERVER_ALIAS nopass
./easyrsa sign-req server $SERVER_ALIAS
# Enter 'yes', then CA password

Finally, we copy the keys and certificates to a dedicated folder for OpenVPN:

mkdir /etc/openvpn/keys
chmod 700 /etc/openvpn/keys
cp pki/ca.crt pki/dh.pem "pki/issued/${SERVER_ALIAS}.crt" "pki/private/${SERVER_ALIAS}.key" /etc/openvpn/keys

We are now ready to configure OpenVPN. Set the variables based on your LAN's configuration (see ifconfig $ETH_IFACE output if unsure):

BRIDGE_IP=192.168.1.1
NETMASK=255.255.255.0
IP_POOL_START=192.168.1.241
IP_POOL_END=192.168.1.254

dnf install openvpn

cat << EOF > /etc/openvpn/bridged.conf
port 1194
dev tap0
tls-server
ca /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/$SERVER_ALIAS.crt
key /etc/openvpn/keys/$SERVER_ALIAS.key # This file should be kept secret
dh /etc/openvpn/keys/dh.pem
server-bridge $BRIDGE_IP $NETMASK $IP_POOL_START $IP_POOL_END

# Password authentication
client-cert-not-required
username-as-common-name
plugin /usr/lib64/openvpn/plugins/openvpn-plugin-auth-pam.so openvpn

# Allow multiple client connections from the same user
duplicate-cn

# Client should attempt reconnection on link failure.
keepalive 10 120

# The server doesn't need root privileges
user openvpn
group openvpn

# Logging levels & prevent repeated messages
verb 4
mute 20
log-append /var/log/openvpn.log
status /var/log/openvpn-status.log

# Set some other options
comp-lzo
persist-key
persist-tun
push persist-key
push persist-tun

# Brings up tap0 since NetworkManager won't do it automatically (yet?)
script-security 2
up up.sh
EOF

OpenVPN will create the tap0 interface automatically when the OpenVPN server starts. NetworkManager is able to enslave the interface to the bridge, but won't bring tap0 online. For that, we install a simple script:

cat << EOF > /etc/openvpn/up.sh
#!/bin/bash
br=br0
dev=\$1
mtu=\$2
link_mtu=\$3
local_ip=\$4
local_netmask=\$5

# This should be done by NetworkManager... but it can't hurt.
/sbin/brctl addif \$br \$dev

# NetworkManager appears to be capable of enslaving tap0 to the bridge automatically, but won't bring up the interface.
/sbin/ifconfig \$dev 0.0.0.0 promisc up
EOF
chmod +x /etc/openvpn/up.sh

Next, we need to create the PAM authentication configuration file for the OpenVPN password plugin:

cat << EOF > /etc/pam.d/openvpn
#%PAM-1.0
auth       substack     system-auth
auth       include      postlogin
auth       requisite    pam_succeed_if.so user ingroup openvpn_pw quiet
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
EOF

This configuration file requires that the users logging in be a member of the openvpn_pw group. You can adjust the file as you see fit.
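
If the group doesn't exist yet, a minimal sketch for creating it and adding a VPN user to it (the username here is just an example):

groupadd openvpn_pw
usermod -aG openvpn_pw alice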

Finally open the OpenVPN port in the firewall and start the service:

firewall-cmd --permanent --add-service openvpn
firewall-cmd --reload
systemctl enable openvpn@bridged
systemctl start openvpn@bridged

Configure the firewall

By default, packets are filtered through iptables, which can cause issues, as packets won't flow freely between the interfaces. We can disable that behavior for bridges:

cat << EOF > /etc/modules-load.d/bridge.conf
br_netfilter
EOF

cat << EOF > /etc/sysctl.d/bridge.conf
net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-arptables=0
EOF
sysctl -p /etc/sysctl.d/bridge.conf

cat << EOF > /etc/udev/rules.d/99-bridge.rules
ACTION=="add", SUBSYSTEM=="module", KERNEL=="br_netfilter", RUN+="/sbin/sysctl -p /etc/sysctl.d/bridge.conf"
EOF

Note that I assume net.ipv4.ip_forward=1 (having libvirt installed seems to configure this automatically). If not, you'll want to set the sysctl parameter net.ipv4.ip_forward to a value of 1.
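
To check the current value, and to persist it if needed (the drop-in file name below is my own choice):

sysctl net.ipv4.ip_forward
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/90-ipforward.conf
sysctl -p /etc/sysctl.d/90-ipforward.conf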

OpenVPN client configuration

That's it! Send a copy of /etc/openvpn/keys/ca.crt on the server to your clients, and you should now be able to connect to your OpenVPN server using this very simple client configuration (don't forget to replace your.server.fqdn with your server's IP address or FQDN):

client
dev tap
proto udp
remote your.server.fqdn 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
auth-user-pass
comp-lzo
verb 3
mute 20

Once connected, you should be able to ping any machine on the LAN as well as fire up Steam for a remote gaming session.

Appendix A: Steam on OS X

Small note: if you're running the Steam client on OS X, there's a bug where the client only sends its UDP broadcast packets for in-home streaming discovery on the machine's primary (i.e. Ethernet or Wi-Fi) interface. This nifty command captures those and re-broadcasts them over the VPN's interface (once again, substitute the value of BROADCAST_ADDR per your LAN settings):

BROADCAST_ADDR=192.168.1.255
sudo tshark -T fields -e data -l 'udp and dst port 27036' | script -q /dev/null xxd -r -p | socat - UDP-DATAGRAM:${BROADCAST_ADDR}:27036,broadcast

Special thanks to Larry Land for pointing that out in his blog post Run your own high-end cloud gaming service on EC2 (which is awesome and deserves a read, by the way).

The above command requires the Wireshark and socat utilities to be installed, which you can grab using homebrew:

brew install wireshark socat

and if you don't know your subnet's broadcast address, verify it with:

ifconfig tap0 | grep broadcast

Appendix B: Troubleshooting tips

Whenever possible, I like to use the most modern tooling available. This tends to bite me because documentation might not be as good, or the feature set in the replacement tools might be lacking compared to the older tried and true tooling, but I try to always look forward. 'New' tooling like systemd, NetworkManager and firewalld might be rough around the edges, but I like the modern feature set and consistency they bring. Most importantly, using them (instead of dropping various custom shell scripts here and there) feels a lot less like my server is held together with glue, which I like.

While trying different configurations, I discovered a few tricks or debugging commands that proved very useful to me - particularly while migrating commands from online resources intended for the older tooling to the tools mentioned above. Hopefully, you'll find them useful too!

It's always the firewall

The blame for most of the issues you will experience generally falls on firewall or routing problems.

First steps in testing should always be disabling the firewall (systemctl stop firewalld) and, if that doesn't fix it, moving on to checking the routes (route -n or netstat -rn).
If you've identified that the firewall is to blame, re-enable it and identify the root cause by adjusting your configuration while listening for packets to see when packets start flowing again.

Listening for packets

This will be your most used tool. If things don't go as expected, listen on each of the tap0 (client), tap0 (server) and br0 (server) interfaces and then generate some traffic to see how far the packets get:

tcpdump -i IFNAME PATTERN

where PATTERN can select for hosts, ports or traffic type. In this case, a particular favorite of mine was icmp or udp port 27036, as this let me test by trying to ping a machine on the LAN from the VPN client, as well as see if the Steam UDP traffic was making it in/out.
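
For example, listening on the server's bridge for both ping tests and Steam's discovery traffic:

tcpdump -i br0 'icmp or udp port 27036'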

Send UDP traffic

The iperf utility can be used to test if UDP traffic makes it to the OpenVPN server (run iperf -c server.ip -u -T 32 -t 3 -i 1 -p 27036) from the OpenVPN clients/LAN machines (run iperf -s -u -i 1 -p 27036).

Changing the zone of an interface

firewalld has different 'zones', each with different rules (see firewall-cmd --list-all-zones). Interfaces will have the rules from the default zone applied to them unless otherwise configured, which you can do as follows:

firewall-cmd --permanent --change-interface=IFNAME --zone=NEW_ZONE
firewall-cmd --reload

Adding raw IPTables rules to firewalld

e.g. to accept all packet forwarding on the bridge interface:

firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i br0 -j ACCEPT

or to allow all packets flowing over br0 and tap0:

firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -i tap0 -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -i br0 -j ACCEPT

Neither of these commands should be necessary given the netfilter sysctl parameters tweaked earlier, but certain OpenVPN options (such as client2client) change the packet flow and cause packets to flow over the interface, get filtered, then be re-injected, which could cause them to suddenly be affected by the iptables rules.

Recall that firewall-cmd --reload needs to be called before the permanent rules will take effect.

Debugging IPTables

The default iptables -L listing isn't very helpful when inserting/deleting rules by their chain offset. The following command lists all rules, numerically, and displays line numbers:

iptables --line-numbers -L -n -v

You can also log dropped packets for further troubleshooting (here limited to 1/s, inserted at position 15 which was the position before the DROP rules on my machine):

iptables -I INPUT 15 -j LOG -m limit --limit 60/min --log-prefix "iptables dropped: " --log-level 4

Configuring a client VPN connection using only NetworkManager

VPN_IFNAME=homelab
nmcli c add type vpn ifname $VPN_IFNAME vpn-type openvpn
nmcli c edit $VPN_IFNAME
# the following 'set' commands are entered at the interactive nmcli> prompt
set vpn.data dev = tap
set vpn.data ca = /path/to/copy/of/ca.crt
set vpn.data connection-type = password
set vpn.data remote = your.server.fqdn
set vpn.data comp-lzo = yes
set vpn.data username = your_user
set vpn.secrets password = your_pw
save
quit

'Waiting for password' when connecting using Tunnelblick

When I was attempting to test my VPN connections using Tunnelblick on OS X, I experienced an annoying bug: when trying to connect, Tunnelblick would enter a 'Waiting for password' state, but never 'get' the password nor prompt for one. Logs were misleading:

Tunnelblick: Obtained VPN username and password from the Keychain

No VPN password was stored in my keychain (verified using Keychain Access.app). Fortunately, this post on the Sophos community forums correctly identified the issue as a bug that appears after copying/renaming a Tunnelblick connection.

Tunnelblick's preference file needs to be adjusted in order to correctly prompt for a password again:

# from https://community.sophos.com/products/xg-firewall/f/124/t/75819
conname="homelab"
defaults delete net.tunnelblick.tunnelblick "${conname}-keychainHasPrivateKey"
defaults delete net.tunnelblick.tunnelblick "${conname}-keychainHasUsername"
defaults delete net.tunnelblick.tunnelblick "${conname}-keychainHasUsernameAndPassword"



          VMware: A PowerCLI module to massively backup/restore ESXi hosts configurations        
Flexibility is one of the greatest advantages of ESXi. Almost every aspect can be customized and tuned using both basic and advanced configurations in order to achieve a custom tailored system.

Configuring settings is a time-consuming process: networking configurations for virtual standard switches, iSCSI vmkernels, port binding, NTP configuration, and so on.

In case of a host reinstall you can save precious time by using a great PowerCLI cmdlet: Get-VMHostFirmware.

This cmdlet creates a tar-compressed archive containing all of an ESXi host's configurations. Conversely, to recover a backed-up configuration, the Set-VMHostFirmware cmdlet is used.

In this blog post I provide a PowerCLI module that will allow you to back up, and eventually restore, ESXi host configurations.

This module uses the Get-VMHostFirmware and Set-VMHostFirmware cmdlets, introducing the possibility to pass more than a single host as the source for a backup or the target for a restore, and automatically enters/exits each host into/from maintenance mode before and after a restore occurs.

Let's start by briefly explaining how to use the module:

Typically the first step is to connect to a vCenter Server via PowerCLI in order to be able to perform backups or restores of one or more vCenter-registered hosts. A PowerCLI connection to a single ESXi host is also supported, but for obvious reasons you can back up/restore only that specific host.

Once you have downloaded the script provided below, you will have a .psm1 file (the common extension for PowerShell modules) that must be imported into PowerCLI in order to use it.

 Import-Module C:\Users\Paolo\WindowsPowerShell\Modules\BackupRestore  

Where C:\Users\Paolo\WindowsPowerShell\Modules\ is the path of the BackupRestore.psm1 file on your PC.

The BackupRestore module introduces two new functions into PowerCLI: Backup-VMHost and Restore-VMHost.

Backup-VMHost requires as mandatory parameters:

-VMHost: backup source. The IP address or FQDN of one or more ESXi hosts you want to back up configurations from.
-FilePath: location where configuration bundles will be saved.

The following example backs up the configuration of hosts 192.168.243.143 and 192.168.243.144, then saves their configurations into C:\Users\Paolo\Desktop.

 Backup-VMHost -VMHost 192.168.243.143,192.168.243.144 -FilePath C:\Users\Paolo\Desktop  


Restore-VMHost requires:

-VMHost: Restore destination. IP address or FQDN of one or more ESXi hosts you want to recover configurations to.

-FilePath: location where configuration bundles can be fetched in order to be restored on host(s)

-HostUsername: ESXi host username

-HostPassword: ESXi host password

The following command will first place each host into maintenance mode, then restore the configuration bundle on ESXi hosts 192.168.243.143 and 192.168.243.144, taking it from the C:\Users\Paolo\Desktop folder.

By default, configuration bundles are saved as configBundle-<ESXi_host_IP_address>.tar (for example: configBundle-192.168.243.143.tar), so the Restore-VMHost function expects files named this way to be present in the source folder.
Finally it will wait a few minutes (3 by default), giving each host time to perform a reboot, then remove the host from maintenance mode.

 Restore-VMHost -VMHost 192.168.243.143,192.168.243.144 -FilePath C:\Users\Paolo\Desktop -HostUsername root -HostPassword vmware  


As usual this code is also available on my GitHub repository: BackupRestore.psm1



That's all!!
          VMware: VSAN Part4 - Automate VSAN using PowerCLI        
VSAN deployment can be automated using PowerCLI. PowerCLI Extensions must be installed in order to add VSAN & vFRC cmdlets to PowerCLI.

As explained in the Automating vFRC deployment with PowerCLI post, a few steps are required in order to register the VMware.VimAutomation.Extensions module.

After that several new cmdlets become available:

Get-VsanDisk  
Get-VsanDiskGroup
New-VsanDisk
New-VsanDiskGroup
Remove-VsanDisk
Remove-VsanDiskGroup

The following script allows you to automate the creation of a VSAN enabled cluster in just one click.

Here are the steps performed by the script:

-Import VSAN cmdlets in PowerCLI session in order to use new cmdlets.
-Connect to vCenter Server.
-Create a Datacenter.
-Create a VSAN enabled Cluster.
-Insert and assign a license to the VSAN cluster (this is optional, but in my vLab I was not able to claim VSAN disks without first licensing the VSAN solution).
-Add all hosts participating in VSAN cluster.
-Add a VSAN vmkernel to each host vSwitch.



Prior to launching the PowerCLI script, make sure you correctly set the required variables.

Here is the script, I've also added it on my GitHub repository:

Download Automating VSAN.ps1 from GitHub

 #Registering VSAN PowerCLI module  
$p = [Environment]::GetEnvironmentVariable("PSModulePath")
echo $p #Show your current path to modules
$p += ";C:\Users\Paolo\WindowsPowerShell\Modules" #Add your custom location for modules
[Environment]::SetEnvironmentVariable("PSModulePath",$p)
#Variable declaration
$vCenterIPorFQDN="192.168.243.40"
$vCenterUsername="Administrator@vsphere.local"
$vCenterPassword="vmware"
$DatacenterFolder="DCFolder"
$DatacenterName="VSANDC"
$ClusterName="NewCluster"
$VSANHosts= @("192.168.243.137","192.168.243.142","192.168.243.141") #IP or FQDN of hosts participating in VSAN cluster
$HostUsername="root"
$HostPassword="mypassword"
$vSwitchName="vSwitch0" #vSwitch on which create VSAN enabled vmkernel
$VSANvmkernelIP= @("10.24.45.1","10.24.45.2","10.24.45.3") #IP for VSAN enabled vmkernel
$VSANvmkernelSubnetMask="255.255.255.0" #Subnet Mask for VSAN enabled vmkernel
$vsanLicense="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX" #VSAN License code
Write-Host "Importing PowerCLI VSAN cmdlets" -foregroundcolor "magenta"
Import-Module VMware.VimAutomation.Extensions
Write-Host "Connecting to vCenter" -foregroundcolor "magenta"
Connect-VIServer -Server $vCenterIPorFQDN -User $vCenterUsername -Password $vCenterPassword
Write-Host "Creating Folder" -foregroundcolor "magenta"
Get-Folder -NoRecursion | New-Folder -Name $DatacenterFolder
Write-Host "Creating Datacenter and Cluster" -foregroundcolor "magenta"
New-Cluster -Location (
New-Datacenter -Location $DatacenterFolder -Name $DatacenterName
) -Name $ClusterName -VsanEnabled:$true -VsanDiskClaimMode Automatic
$i = 0 #Initialize loop variable
Write-Host "Licensing VSAN cluster" -foregroundcolor "magenta"
#Credits to Mike Laverick - http://www.mikelaverick.com/2013/11/back-to-basics-post-configuration-of-vcenter-5-5-install-powercli/
$datacenterMoRef = (Get-Cluster -Name $ClusterName | get-view).MoRef
$serviceinstance = Get-View ServiceInstance
$LicManRef=$serviceinstance.Content.LicenseManager
$LicManView=Get-View $LicManRef
$licenseassetmanager = Get-View $LicManView.LicenseAssignmentManager
$licenseassetmanager.UpdateAssignedLicense($datacenterMoRef.value,$vsanLicense,"Virtual SAN 5.5 Advanced")
foreach ($element in $VSANHosts) {
    Write-Host "Adding" $element "to Cluster" -foregroundcolor "magenta"
    Add-VMHost $element -Location $ClusterName -User $HostUsername -Password $HostPassword -RunAsync -force:$true
    Write-Host "One minute sleep in order to register" $element "into the cluster" -foregroundcolor "magenta"
    Start-Sleep -s 60
    Write-Host "Enabling VSAN vmkernel on" $element "host" -foregroundcolor "magenta"
    # -lt keeps the index within the bounds of the $VSANvmkernelIP array
    if ($i -lt $VSANHosts.Length) {
        New-VMHostNetworkAdapter -VMHost (Get-VMHost -Name $element) -PortGroup VSAN -VirtualSwitch $vSwitchName -IP $VSANvmkernelIP[$i] -SubnetMask $VSANvmkernelSubnetMask -VsanTrafficEnabled:$true
    }
    $i++
}
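
Once the script has run, a quick sanity check is to list the resulting disk groups per host. A minimal sketch using the same extension cmdlets (treat the -VMHost parameter name as an assumption from the PowerCLI Extensions fling):

 # list the VSAN disk group created on each host of the cluster
 foreach ($vmhost in (Get-Cluster -Name $ClusterName | Get-VMHost)) {
     Get-VsanDiskGroup -VMHost $vmhost
 }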

Other blog posts in VSAN Series:

VSAN Part1 - Introduction
VSAN Part2 - Initial Setup
VSAN Part3 - Storage Policies
VSAN Part4 - Automate VSAN using PowerCLI
          VMware: VSAN Part2 - Initial Setup        
Since VSAN is built into the ESXi 5.5 hypervisor, it does not require an installation, only an enablement. A VSAN-capable cluster must be created, and appropriate disks must be claimed by hosts in order to provide capacity and performance to the cluster.

For VSAN testing we need at least three ESXi hosts, each with an unused, unformatted SSD and HDD. VSAN supports a maximum of 1 SSD and 7 HDDs per host. If you installed ESXi locally on an HDD, that disk cannot be used for VSAN, since it has been formatted with VMFS.
VSAN, at this moment, allows up to eight ESXi hosts, both "active" and "passive", within the same cluster. As explained in the previous article, not every host participating in the VSAN cluster must have a local HDD and SSD (hosts that do are referred to by me, for simplicity, as "active hosts"), but we need at least three of them for VSAN to work properly, since by default policy every VM has its vmdks backed by two hosts, with a third host acting as witness.

VSAN can also be tested in a virtual lab; there are no hardware requirements other than a hypervisor (ESXi or Workstation) and enough disk space.

For the purpose of this article I created a vLab environment for VSAN testing, so let's start by creating three ESXi 5.5 hosts.

Each VM on which ESXi will be installed has been configured with:

-VMware ESXi 5.5
-4GB of RAM
-A 2GB HDD for installing ESXi
-A 4GB *fake* SSD for VSAN
-An 8GB HDD for VSAN

Of course you can tune these values according to your needs.

SSDs in nested virtualization are simply virtual disks faked to be recognized by ESXi as SSDs. This can be done following this great article by William Lam: Emulating an SSD Virtual Disk in a VMware Environment.
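
In short, the trick from that article boils down to one line per virtual disk in the VM's .vmx file, which tags the disk so the nested ESXi reports it as an SSD (sketch; scsi0:1 is just an example device number, adjust it to your fake SSD):

 scsi0:1.virtualSSD = 1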

Another great resource provided by William Lam is a deployment template for a VSAN host. Basically it creates a VM with the aforementioned specifications, so if you don't want to configure a VM manually, just download William's.

After the ESXi hosts have been installed, open the vSphere Web Client and add them to a datacenter.



To work, VSAN requires a dedicated network for VSAN traffic. A vmkernel interface is required; when you create or modify it, make sure to tick the Virtual SAN Traffic checkbox.
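
If you prefer the command line, the per-host equivalent can be sketched with esxcli as follows (vmk1 and the VSAN port group name are assumptions for this example):

 # create the vmkernel interface on an existing port group
 esxcli network ip interface add --interface-name=vmk1 --portgroup-name=VSAN
 # give it a static address on the VSAN network
 esxcli network ip interface ipv4 set -i vmk1 -I 10.24.45.1 -N 255.255.255.0 -t static
 # tag the interface for Virtual SAN traffic
 esxcli vsan network ipv4 add -i vmk1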



The resulting vSwitch will be similar to this one:



Now let's create a VSAN enabled cluster. Cluster creation is the same as for any cluster you have already created, but in this case we need to tick the Virtual SAN checkbox. You can leave Automatic under "Add disks to storage" so that suitable VSAN disks are automatically claimed on each host.
DRS & HA can be enabled, since they are fully supported by VSAN.



Add your hosts to the newly created cluster.



VSAN can be managed under the cluster's Manage -> Settings -> Virtual SAN. The General tab reports VSAN status, such as used and usable capacity.



Assign VSAN license under Configuration -> Virtual SAN Licensing.



Now let's assign disks to a disk group. A disk group can be seen as a logical container of SSD and HDD resources, created by aggregating the local SSDs and HDDs of each host. SSDs provide performance and are not counted as usable space, because all writes, after being acknowledged, are destaged from SSD to HDD; HDDs, conversely, provide capacity.
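
As an aside, the claim step can also be scripted with the extension cmdlets covered in Part4. A rough sketch, with cmdlet parameter names assumed from the PowerCLI Extensions fling and a purely illustrative disk selection:

 # pick one local SSD and one local HDD on a host and build a disk group from them
 $vmhost = Get-VMHost -Name 192.168.243.137
 $ssd = Get-ScsiLun -VMHost $vmhost | Where-Object { $_.IsSsd } | Select-Object -First 1
 $hdd = Get-ScsiLun -VMHost $vmhost | Where-Object { $_.IsLocal -and (-not $_.IsSsd) } | Select-Object -First 1
 New-VsanDiskGroup -VMHost $vmhost -SsdCanonicalName $ssd.CanonicalName -DataDiskCanonicalName $hdd.CanonicalName
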
Click on Claim Disks button.



The Claim Disks popup window will appear, listing all unused HDDs and SSDs claimable by VSAN for each server.

Select them all by clicking the Select all eligible disks button.



The disk group will be created.



The changes will be reflected under the General tab. As said before, only the space provided by HDDs is reported as the Total capacity of the VSAN datastore.



At this point the VSAN cluster is correctly set up; we now need to create a custom storage policy and assign it to our VMs residing on VSAN. This will be explained in Part3.

Other blog posts in VSAN Series:


VSAN Part1 - Introduction
VSAN Part2 - Initial Setup
VSAN Part3 - Storage Policies 
VSAN Part4 - Automate VSAN using PowerCLI
          Health Hints: Whole Grain Pasta Tips        
Have you heard that carbohydrates are all bad? Well here are some tips on how to choose whole grain pastas which will provide you with the “good” carbohydrates! Whole Grain Pasta Tips Source: University of California, Berkeley Once found only in health-food stores, whole-grain pastas have gone mainstream. And with improved technology, many of them are less chewy and gummy than they used to be. That’s great news since whole-grain foods — which retain the bran and germ of the kernel and thus all the fiber and most... Read More →
          88 Unexpected Snacks Under 100 Calories        

We’ve all been there: hunger striking before dinnertime, a sudden craving for something sweet, the need for a quick energy boost before working out. The solution? A small and satisfying snack that won't tip that calorie count over the edge—after all, a quick nibble can easily turn into the calorie equivalent of a full-blown meal. These flavorful, low-calorie treats can please any palate while still leaving room for dinner.

Sweet Snacks

1. Chocolate Banana


1/2 frozen banana dipped in 2 teaspoons dark chocolate chips, melted

2. Frozen Grapes

28 grapes (about 1 scant cup), placed in the freezer for 2+ hours

3. Honeyed Yogurt

1/2 cup nonfat Greek yogurt with 1 dash cinnamon and 1 teaspoon honey

4. Mini PB&F

1 Fig Newton with 1 teaspoon peanut butter

5. Spiced Orange

1 medium orange, sprinkled with cinnamon

6. Grilled Pineapple

2 1/4-inch thick pineapple rounds (3 1/2-inch diameter), grilled (or sautéed) for 2 minutes or until golden

7. Berries 'n’ Cream

1 cup blueberries with 2 tablespoons whipped topping

8. Stuffed Figs

3 small dried figs stuffed with 1 tablespoon part-skim ricotta and sprinkled with cinnamon

9. Nuts 'n’ Berries

2/3 cup blueberries sprinkled with 1 tablespoon slivered almonds

10. Dark Chocolate

1/2 ounce (about 1 block or 3 squares)

11. Nut-Stuffed Date

1 medjool date filled with 1 teaspoon natural unsalted almond butter

12. Chocolate Milk

6 ounces skim milk mixed with 2 teaspoons chocolate syrup

13. Cinnamon Applesauce

1 cup unsweetened applesauce, sprinkled with cinnamon

14. Citrus-Berry Salad

1 cup mixed berries (raspberries, strawberries, blueberries, and/or blackberries) tossed with 1 tablespoon freshly squeezed orange juice

15. Maple-Pumpkin Yogurt

1/2 cup nonfat plain yogurt (go Greek for extra protein) mixed with 2 tablespoons pumpkin puree and 1 teaspoon maple syrup

16. Chocolate Pudding

1 4-ounce container fat-free pudding

17. Chocolate-Covered Strawberries

7 strawberries dipped in 1 tablespoon dark chocolate, melted

18. Tropical Juice Smoothie


1/4 cup each 100-percent pineapple juice, orange juice, and apple juice, blended with ice

19. Vanilla and Banana Smoothie

1/3 cup sliced banana, 1/4 cup vanilla Greek yogurt, and 1 handful ice, blended until smooth

20. M.Y.O. Banana Chips

1 sliced small banana dipped in lemon juice and baked

21. Baked Apple

1 small apple, cored, filled with 1 teaspoon brown sugar and 1 sprinkle cinnamon, baked until tender

22. Fruity Waffles

1 toasted Kashi 7-Grain Waffle topped with 1/4 cup mixed berries

23. Skinny S’more

2 graham cracker squares with 8 roasted miniature marshmallows and 1 teaspoon dark chocolate chips

24. Cinnamon Graham Crackers and Peanut Butter

2 graham cracker squares with 1 teaspoon peanut butter, sprinkled with cinnamon

25. Cereal and Milk

2/3 cup crisped rice cereal with 1/3 cup skim milk

26. Milk and Cookies

5 animal crackers with 1/2 cup skim milk

27. Warm Spiced Cider

6 ounces apple cider, sprinkled with cinnamon and nutmeg, warmed

28. Fruity Soft Serve

1 small frozen banana, puréed until it reaches a soft-serve ice cream consistency

29. Café Latte

8 ounces steamed skim milk with 1 shot espresso

30. Fruit Leather

2 no-sugar-added strips, like Stretch Island Fruit Co.

31. Maple-Cashew Pear

1/2 medium sliced pear dipped into a mix of 1 teaspoon each maple syrup and cashew butter

32. Protein Chai

1 1/2 tablespoons hemp protein powder, 1/2 small frozen banana, and 1/2 teaspoon chai tea mix (from a tea bag) blended with 6 ounces water

33. M.Y.O. Popsicle

6 ounces bottled lemonade, frozen in an ice pop mold or small paper cup

34. Apple Chips

1/2 cup unsweetened, such as Bare Snacks

Savory Snacks

35. Cucumber Salad


1 sliced large cucumber tossed with 2 tablespoons chopped red onion and 2 tablespoons apple cider vinegar

36. Pistachios

25 kernels

37. Cheese and Crackers

5 Kashi Original 7 Grain crackers with 1 part-skim mozzarella cheese stick

38. Spicy Egg Scramble

2 scrambled egg whites on 1/2 slice whole-wheat toast, drizzled with 1 teaspoon sriracha

39. Cheesy Breaded Tomatoes

2 roasted plum tomatoes sliced and topped with 2 tablespoons breadcrumbs and sprinkled with Parmesan cheese

40. Curried Sweet Potato

1 small sweet potato microwaved for 6 minutes and mashed with 1 teaspoon curry and salt and pepper to taste

41. “Cheesy” Popcorn

2 cups air-popped popcorn with 1 tablespoon nutritional yeast

42. Guacamole-Stuffed Egg Whites

1 halved hard-boiled egg, yolk removed, stuffed with 2 tablespoons guacamole

43. Grilled Spinach and Feta Polenta

2 slices precooked polenta (look for the tubes in the grocery store) topped with 1 teaspoon feta cheese and 1 handful spinach

44. Soy Edamame

1/3 cup boiled shelled edamame with 1 teaspoon soy sauce

45. Dijon Pretzels

2 pretzel rods with 1 tablespoon Dijon mustard

46. Crunchy Curried Tuna Salad

2 ounces (about 1/4 cup) canned white tuna with 1 teaspoon curry powder, 1 tablespoon chopped red onion, and 2 chopped ribs celery

47. Greek Tomatoes

2 medium tomatoes chopped and mixed with 2 tablespoons feta and 1 squeeze lemon juice

48. Shrimp Cocktail

8 large shrimp with 2 tablespoons classic cocktail sauce

49. Smoked Beef Jerky

1 ounce

50. Cheddar and Tomato Soup

1 cup tomato soup with 1 tablespoon shredded low-fat cheddar cheese

51. Kale Chips

2 cups raw kale (stems removed), tossed with 1 teaspoon olive oil and baked at 400 degrees until crisp

52. Sweet Potato Fries

1 light-bulb-sized sweet potato, sliced, tossed with 1 teaspoon olive oil, and baked at 400 degrees for 10 minutes

53. Cucumber Sandwich

1/2 English muffin with 2 tablespoons cottage cheese and 3 slices cucumber

54. Turkey Roll-Ups

2 slices smoked turkey rolled up and dipped in 2 teaspoons honey mustard

55. Wasabi Peas


1/4 cup

56. Antipasto Plate

3 pepperoncini, 1/2-inch cube cheddar cheese, 2 slices pepperoni, 2 extra-large olives

57. Pumpkin Seeds

2 tablespoons pumpkin seeds, spritzed with oil, and baked at 400 degrees for 15 minutes or until brown, sprinkled with kosher salt

58. Choco-Soy Nuts

3 tablespoons soy nuts with 1 teaspoon cocoa nibs

59. Mixed Olives

8 large olives

60. Balsamic Veggies

3 cups raw peppers, sliced, dipped in 2 tablespoons balsamic reduction

61. Cheesy Roasted Asparagus

6 spears, spritzed with olive-oil spray, sprinkled with 2 tablespoons grated Parmesan cheese, and baked at 400 degrees for 10 minutes

62. Carrots and Hummus

12 medium baby carrots with 2 tablespoons hummus

63. Spinach and Feta Egg White Scramble

3 scrambled egg whites mixed with 1/2 cup raw spinach and 1 tablespoon feta cheese, cooked over the stove or in the microwave until egg whites are no longer runny

64. Crunchy Kale Salad

2 cups chopped kale leaves tossed with 1 teaspoon honey and 1 tablespoon balsamic vinegar

65. Chickpea Salad

1/3 cup chickpeas tossed with 1 tablespoon sliced scallions, 1 squeeze lemon juice, and 1/4 cup diced tomatoes

66. Grilled Garlic Corn on the Cob

1 small corn cob brushed with 1 teaspoon sautéed minced garlic and 1 teaspoon olive oil, grilled until tender

67. Pretzels and Cream Cheese

15 mini pretzel sticks with 2 tablespoons fat-free cream cheese

68. Bacon Brussels Salad

7 thinly sliced Brussels sprouts mixed with 1 crumbled piece turkey bacon

69. Rosemary Potatoes

1/3 cup thinly sliced potato tossed with 1 teaspoon olive oil and 1 teaspoon chopped fresh rosemary

70. Spicy Black Beans

1/3 cup black beans with 1 tablespoon salsa and 1 tablespoon nonfat Greek yogurt

71. Caprese Salad

1 ounce (about 1 hockey puck) fresh mozzarella with 1/3 cup cherry tomatoes and 2 teaspoons balsamic vinegar

72. Goldfish Crackers

38 crackers

73. Chips and Salsa

10 baked tortilla chips with 1/4 cup salsa

74. Mini Ham Sandwich

2 slices honey-baked ham with 2 teaspoons honey mustard, rolled in 1 lettuce leaf

75. Lox Bagel

1/2 whole-wheat mini bagel with 1 ounce (2 thin slices) lox

Sweet and Salty Snacks

76. Apples and Peanut Butter


1/2 slice apple dipped into 1/2 tablespoon natural peanut butter

77. Apples and Cheese

1 light mozzarella cheese stick with 1/2 sliced medium apple

78. PB & Celery

1 medium (about 6 inches long) celery stalk with 1 tablespoon peanut butter

79. Cottage Cheese Melon Boat

3/4 cup melon balls with 1/2 cup nonfat cottage cheese

80. Carrot and Raisin Salad

1 cup shaved carrots with 1 1/2 tablespoons raisins and 1 tablespoon balsamic vinegar

81. Tropical Cottage Cheese

1/2 cup nonfat cottage cheese with 1/4 cup each chopped mango and pineapple

82. Blue Cheese-Stuffed Apricots

3 dried apricots with 1 tablespoon crumbled blue cheese

83. Rice Cake and Almond Butter

1 rice cake (try brown rice!) with 2 teaspoons almond butter

84. Sweet 'n’ Spicy Pecans

5 pecans roasted with 2 teaspoons maple syrup and 1 teaspoon cinnamon

85. Chocolate Trail Mix

8 almonds, 1/2 tablespoon chocolate chips, and 1 tablespoon raisins

86. Chocolate-Hazelnut Crackers

5 Wheat Thins dipped in 1/2 tablespoon Nutella or other hazelnut spread

87. Strawberry Salad

2 cups raw spinach with 1 cup sliced strawberries and 1 tablespoon balsamic vinegar

88. Cacao-Roasted Almonds

8 nuts

Originally posted December 2012. Updated July 2015.


          Beware of the mouse – if you’re sowing peas and beans, that is        

Packets of peas and beans generally come with instructions to sow them in the ground where they are to grow. It sounds good enough, but it puzzles me because the instructions don’t take into account a certain small mammal, the mouse - Apodemus sylvaticus.



Field mouse, wood mouse, call them what you will, but they love peas and beans and can sniff them out as fast as you sow them. They love sweet corn, too, and will neatly lift every carefully sown kernel, leaving barely a trace of their foraging. For this reason, I prefer to sow into trays and then keep them on metal racks that mice can’t climb up, until they have put out at least one set of leaves and can be safely put outside. I could, of course, trap and kill them, and many gardeners do, but I choose not to.

Mice are mostly nocturnal, so you don’t often see them, but they leave signs of their presence. If you have seed or berry producing trees nearby, like cherry or holly, it’s likely that a mouse will gather the seed and store it somewhere, to be eaten later; the corner of a dry garage is a favourite spot. The inside of a wood pile is a good storage area too – in ours we find many cherry stones. Once, I even found a disused bird’s nest piled high with holly berries.



          Installing Oracle 10g on RedHat 5 in Detail (repost)        
I. Installation environment
I virtualized RHEL5 in VMware, allocating 1GB of memory and a 1GB SWAP partition; I suggest you change the SWAP partition to 2GB, otherwise the prerequisite check step during installation will report a failure. (Of course, nothing stops you from forcing the installation anyway.)

II. Configuration before installing Oracle 10g Release 2

1. Install the packages required by Oracle 10g R2

# cd /mnt/cdrom/Server/
# rpm -Uvh setarch-2*
# rpm -Uvh make-3*
# rpm -Uvh glibc-2*
# rpm -Uvh libaio-0*
# rpm -Uvh compat-libstdc++-33-3*
# rpm -Uvh compat-gcc-34-3*
# rpm -Uvh compat-gcc-34-c++-3*
# rpm -Uvh gcc-4*
# rpm -Uvh libXp-1*
# rpm -Uvh openmotif22-*
# rpm -Uvh compat-db-4*

Of these, openmotif22-2.2.3-18 and compat-db-4.2.52-5.1 are on the third disc; all the other packages are on the first disc.

2. Modify the /etc/redhat-release file

Because Oracle 10g officially supports only up to RHEL4, the version string must be changed: edit /etc/redhat-release, delete "Red Hat Enterprise Linux Server release 5 (Tikanga)", and replace it with redhat-4.

3. Modify kernel parameters

#vi /etc/sysctl.conf

kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default=262144
net.core.rmem_max=262144
net.core.wmem_default=262144
net.core.wmem_max=262144

To make the changes take effect immediately, use the following command:

#sysctl -p

4. Create the user, groups, and directories needed to install Oracle

#groupadd oinstall
#groupadd dba
#groupadd oper
#useradd -g oinstall -G dba oracle
#passwd oracle

#mkdir /oracle
#chown -R oracle:oinstall /oracle
#chmod -R 775 /oracle

It is recommended to put the Oracle installation directory on a separate partition or disk.

5. Set shell limits for the oracle user

#vi /etc/security/limits.conf    

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

Next, edit /etc/pam.d/login and add the following content so that the shell limits take effect:

#vi /etc/pam.d/login

session                  required                pam_limits.so

6. Configure the IP address

It is best to use a static IP address when installing RHEL; if you chose DHCP at the time, you now need to edit /etc/sysconfig/network-scripts/ifcfg-eth0:

[root@TSM54-Test network-scripts]# cat ifcfg-eth0
# Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]
DEVICE=eth0
BOOTPROTO=static
HWADDR=00:0C:29:4B:17:C4
ONBOOT=yes
IPADDR=192.168.68.98
NETMASK=255.255.255.0
GATEWAY=192.168.68.10

 

7. Configure the oracle user's environment variables

The following steps should be performed logged in as the oracle user.

To prevent garbled characters while installing Oracle, first switch the locale to English; in a terminal, enter:

[oracle@TSM54-TEST ~]$export LC_CTYPE=en_US.UTF-8

Next, edit the .bash_profile file in the /home/oracle directory and add the following:

export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORACLE_SID=orcl
export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export CLASSPATH


if [ $USER = "oracle" ]; then
        if [ $SHELL = "/bin/ksh" ]; then
                ulimit -p 16384
                ulimit -n 65536
        else
                ulimit -u 16384 -n 65536
        fi
fi
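
To load the new variables into the current session and verify them, you can run (a simple check, not in the original article):

[oracle@TSM54-TEST ~]$ source ~/.bash_profile
[oracle@TSM54-TEST ~]$ echo $ORACLE_BASE $ORACLE_HOME $ORACLE_SID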

III. Installing Oracle 10g

I placed 10201_database_linux32.zip in the /opt directory and unpacked it with

# unzip 10201_database_linux32.zip

Now, as the oracle user, go to the /opt/database directory and run the installer:

[oracle@TSM54-Test opt]$ cd /opt/database/
[oracle@TSM54-Test database]$ ls
doc install response runInstaller stage welcome.html
[oracle@TSM54-Test database]$ ./runInstaller

1. Choose the installation type; here I chose Advanced Installation, as shown below:

2. Specify the inventory directory and credentials, as shown:

3. Choose the database edition to install and the languages Oracle will support; here I chose Enterprise Edition, with English and Simplified Chinese as the languages, as shown below:

4. Specify the Oracle environment variables and installation path. Because we already declared them in .bash_profile, they are filled in automatically here, as shown below:

5. Oracle starts its pre-installation checks, as shown below:

6. Choose the configuration option, as shown below:
7. Choose the database mode to create, as shown below:
8. Specify the database configuration options (SID, character set, etc.), as shown below:
9. Choose the database management option, as shown below:
10. Specify the database storage option, as shown below:
11. Specify the database backup and recovery options, as shown below:
12. Specify the passwords for the database accounts, as shown below:
13. The installation summary is displayed, as shown below:
14. Installation begins, as shown below:
15. After clicking OK in the window above, the following appears:
Execute with root privileges:
#/oracle/oraInventory/orainstRoot.sh
#/oracle/product/10.2.0/db_1/root.sh
16. The installation finishes, as shown below:
17. After installation completes, restore the locale and version information:
#export LC_CTYPE=zh_CN.UTF-8
#vi /etc/redhat-release
Red Hat Enterprise Linux Server release 5 (Tikanga)
IV. Follow-up
1. After the installation completes, the first thing to do is start the listener.
The listener accepts connection requests from clients and, after verifying credentials, creates database connections. To use OEM or iSQL*Plus, the listener must be started first.

[oracle@TSM54-Test database]$ lsnrctl start
[oracle@TSM54-Test database]$ lsnrctl stop

2. Use Oracle Enterprise Manager 10g for database control

The commands to start and stop OEM are:

[oracle@TSM54-Test database]$emctl start dbconsole
[oracle@TSM54-Test database]$emctl stop dbconsole

 

In a web browser, enter:

http://192.168.68.98:1158/em (if the server does not have DNS resolution, you can use the IP address)

Username: SYS

Password: <the password created during installation>

Connect as: SYSDBA

3. Use iSQL*Plus to access the database

The commands to start and stop iSQL*Plus:

[oracle@TSM54-Test database]$isqlplusctl start
[oracle@TSM54-Test database]$isqlplusctl stop

iSQL*Plus is the web-based version of the venerable SQL*Plus interactive tool for accessing the database. To use iSQL*Plus, click the iSQL*Plus link in the Related Links section of the OEM console, or point your browser at the iSQL*Plus URL provided during installation.

In a web browser, enter:

http://192.168.68.98:5560/isqlplus

Username: SYSTEM

Password: <the password created during installation>

Connect identifier: orcl

4. Starting and stopping the database

The simplest way to start and stop the database is from the OEM console. To do it from the command line, log in as oracle and use SQL*Plus, as follows:

Startup:

$ sqlplus

SQL*Plus:Release 10.1.0.2.0 - Production on Sun Jun 13 22:27:48 2004

Copyright (c) 1982, 2004, Oracle.All rights reserved.

Enter user-name:/ as sysdba
Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area  188743680 bytes
Fixed Size                   778036 bytes
Variable Size             162275532 bytes
Database Buffers           25165824 bytes

Redo Buffers                 524288 bytes
Database mounted.
Database opened.
SQL> exit

Shutdown:

$ sqlplus

SQL*Plus:Release 10.1.0.2.0 - Production on Sun Jun 13 22:25:55 2004

Copyright (c) 1982, 2004, Oracle.All rights reserved.

Enter user-name:/ as sysdba

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Note: the screenshots in this article are borrowed from http://www.ouyaoxiazai.com/article/24/239.html.
V. Uninstalling Oracle
1. Run $ORACLE_HOME/bin/localconfig delete
2. rm -rf $ORACLE_BASE/*
3. rm -f /etc/oraInst.loc /etc/oratab
4. rm -rf /etc/oracle
5. rm -f /etc/inittab.cssd
6. rm -f /usr/local/bin/coraenv /usr/local/bin/dbhome /usr/local/bin/oraenv
7. Delete the oracle user and groups.
Note: this uninstall method comes from Google. I once used it to remove Oracle 11g (installed with ASM).
VI. About installing Oracle 11g Release 1
Oracle 11gR1 already supports RHEL5, so there is no need to modify the redhat-release file.
The packages the installation depends on are as follows:

      binutils-2.17.50.0.6-2.el5
      compat-libstdc++-33-3.2.3-61
      elfutils-libelf-0.125-3.el5
      elfutils-libelf-devel-0.125
      glibc-2.5-12
      glibc-common-2.5-12
      glibc-devel-2.5-12
      gcc-4.1.1-52
      gcc-c++-4.1.1-52
      libaio-0.3.106
      libaio-devel-0.3.106
      libgcc-4.1.1-52
      libstdc++-4.1.1
      libstdc++-devel-4.1.1-52.e15
      make-3.81-1.1
      sysstat-7.0.0
      unixODBC-2.2.11
      unixODBC-devel-2.2.11

The remaining steps are no different from installing Oracle 10gR2.




          5 Apps for Measuring Startup Time        


5 Apps for Measuring Startup Time - Over time, as you install programs on a Windows computer, many of them add entries to the startup items and register Windows Services that must run when Windows starts.

This slows down a Windows 8 PC, both in boot time and in performance, because too many programs want to run. You can use the built-in msconfig utility, or good freeware such as WinPatrol or CCleaner, to remove, disable, or manage your startup programs.

But if you need to measure how long Windows 8 takes to boot, you can use the Windows Assessment and Deployment Toolkit, or check out some of these free applications that make it easy.

Windows Boot Timer
Windows Boot Timer loads itself into memory when you start your computer and measures the total system boot time. Once all system processes have been loaded, the utility removes itself from system memory and displays the total boot time.

No installation is required. All you need to do is double-click the executable, and after a restart it will display the time the computer needed to load Windows; time spent in the BIOS is not counted.

BootRacer


BootRacer lets you measure the time your Windows computer needs to boot. The main function of BootRacer is complete control over the Windows boot time.

AppTimer
AppTimer is freeware that runs an executable a pre-set number of times and measures how long it takes to start each time. It measures the time until the application reaches the state where user input is accepted, then exits the application. After each run, AppTimer automatically closes the application before starting it again.

Soluto
Soluto is a tool that not only measures your boot time but also helps you optimize it further. The company employs innovative low-level Windows kernel technology to identify what users ask their PCs to do, and what their PCs do in response.

Boot Analyzer
MaaS360 Boot Analyzer provides detailed information about your computer's boot activity. It has a clean, easy-to-understand interface. The main window displays a chart detailing the date and time of each boot. While measuring boot time it also reports the boot count, and it keeps a history of your previous boot times.

If you are interested in these free applications, you can go straight to http://www.softpedia.com/get/System/System-Info/MaaS360-Boot-Analyzer.shtml. That is it for this look at 5 apps for measuring startup time; hopefully it is useful to you.


          Letter U - Under the Sea Tank- (Recycle Earth Day)        

Under the Sea Fish Tank

I was not sure if it was letter T week again or the letter U...

My daughter's teacher told me they would continue last week's activities about the Ocean, about Earth Day and that they would go on a field trip to the beach and do some research and snorkeling, so we came up with this
Under the Sea Fish Tank made with Recycled Materials to celebrate Earth Day!

(This is another variant of the Kijkdoos I explained last week...)

Working together was so much fun and this project was so easy to do!
All this was made with stuff I had at home....look at the details...

  • A shoebox...
  • This pattern I created and I want to share with you for download and use.
Under the Sea Fish Tank






NOTE:

You can download this template and use it anytime, but please don't copy it anywhere else without my permission. I worked on it and a simple link to this page would be a great way to share it. Thanks!!!

























The Top Sign was made with the back of the shoebox. I cut a piece of the box before placing the blue tissue paper so the light would come through. We glued 2 Goldfish crackers and my daughter wrote the word TANK.


Seastars - Under the Sea Fish Tank

Crab - Under the Sea Fish Tank

The Red Crab is made with Farfalle Pasta (Bow Tie pasta) and red paint. Glued some eyes on and done.

And the Sea Stars with Stelline Pasta.
The Sea Algae (Seaweed) was made with Raffia.


Popcorn - Under the Sea Fish Tank























In the corner you can see some Corn Kernels. These are our Sea Plants!

The Rocks are little stones from our backyard.

Glitter gives the water a bubbling effect.

Walnut - Under the Sea Fish Tank
The Sea Shells are Cavatelli Pasta and the Coral Reef is made of pieces of Walnut.

Stones - Under the Sea Fish Tank
After we placed everything inside the box, I used some thread to hang the fish.
Are you ready to go Scuba diving???

Pattern Color Under the Sea Fish Tank
We used these real cool markers called Elmer's Kids Arts and Crafts Paintastics Color Changing Markers
A great way to draw and create with no mess. These paint brush pens feature vivid, true color paint right in the brush, so there is no need for jars, water or separate brushes. The washable paint in the pens dries instantly for less smudging.

These color changing pens paint in one color, then can be changed to two more colors by using the color Magic Wand.

Simply rub the Magic Pen over the other 5 pens to produce instant color changes. Each pen can produce 3 different colors!


          Kaggle’s Data Science Community to Solve Public Problems with Commerce Open Data        

Quick quiz:

  • Which state has the highest percentage of working moms?
  • Who’s more employed—people with bachelor’s degrees or doctorates?
  • Who earns more income—people who get to work at 7 am, 8 am or 7 pm?
Counselor Justin Antonipillai and Kaggle CEO  Anthony Goldbloom.

When we think about the information that the American people should have at our fingertips to make decisions about the way we live and work, the above data is exactly the kind that needs to be accessible and available. And when Commerce—“America’s Data Agency”—issued a call to the private sector to get our data out to those who are not accessing it directly, these were exactly the kinds of answers we were looking for. You have seen our prior call for help for the public good, and Kaggle is one of those companies that have stepped up to the challenge.

With Commerce public datasets loaded on to the Kaggle platform, you can find the answers to the above questions. In fact, “Kagglers”—members of the Kaggle community—analyzed data from the Census Bureau’s American Community Survey, the nation’s premier source for information about America’s changing population, housing and workforce, to challenge conventional wisdom with these answers.

Kaggle has committed to putting valuable Commerce datasets in front of its global community of data scientists, developers and coders. Making public data more open and accessible in this way helps democratize our data, promote data equality, and show what’s possible when the private and nonprofits sectors collaborate to take public data and run with it to address public problems.

Kaggle’s Response to the Challenge of Data Inequality

As Anthony will tell you, Kaggle’s mission is to help the world learn from data, making it easier for researchers, data scientists, and hobbyists to work collaboratively on reproducible projects by allowing data, code, and discussion to live and grow in a single ecosystem.

Responding to the Department’s call to address data inequality, Kaggle has committed to taking a series of publicly available Commerce datasets from the US Patent and Trademark Office and the US Census Bureau and others, and challenge the Kaggle community to solve public problems. Kagglers will be challenged to analyze innovation, creativity, and technological progress in the United States, and dig deeply into the stories of how Americans live and work to uncover insights about our country.

And, how does putting Commerce datasets on the Kaggle platform and before the Kaggle community help address data inequality?

First, by publishing datasets into an active data science community of around 700,000 Kagglers, where sharing insights, analytic approaches or methods and learning is the norm, there is a real opportunity to bring insights from this data to people, charities, nonprofits and small companies around the country. In addition to data, the Kaggle platform offers conversational threads, visual stories and a repository of documented code to accompany datasets prepared for analysis.

Second, Kaggle also runs machine-learning competitions in domains ranging from the diagnosis of diabetic retinopathy to the classification of galaxies, and brings together machine-learning veterans and students with varied academic and professional backgrounds. Datasets shared on Kaggle enable data scientists, researchers, and others who work with data, to find and share anything from civic statistics to European soccer matches for open community collaboration. This permits combining consistent access to public data with reproducible analysis, visibility of results, and conversations on forums with others interested in the data.

The ability to combine our Commerce data with other public data sets could bring insights that may not exist in our data alone.

Third, the in-browser analytics platform, Kaggle Kernels, will allow open analysis, visualization, and modeling of the Commerce data sets, as you’ll see illustrated below. Each Commerce dataset will be accompanied by a repository of code and insights, which enables quick learning and active contribution by the whole community.

The goal of all of this is to enable data scientists to find critical insights in our data and share them with the American people.

Kaggle will post more Commerce public datasets soon. We look forward to giving you an update—and of course, getting your thoughts, insights and comments.

– Justin and Anthony

PS: Here are the answers to the quiz at the top of this blog:

Kaggle Kernel, involving over 11,000 data scientists, found that Americans who start their day around 8 am earn the most.

This Kaggle Kernel investigated whether it pays to pursue a PhD and the best states to find a job post-degree. The analysis has received over 30,000 views and nearly 90 other data scientists have created reproducible forks of the code.

One working mother and data scientist uses the rich data provided by the Census Commerce American Communities Survey to explore the stories of American working moms in this Kaggle Kernel viewed by over 14,000 people.


          absinthe tea - herbal tea of wormwood, licorice, anise and mint - organic, fair trade loose herbal tea bursting w/old world charm by pixxxiepieandposie        

8.75 USD

A relaxing herbal tea from another dimension some would say. A hypnotically delicious cup of awesome tea-time wonder is what I like to call it right before I drink it on moody, stormy nights.

Please note that I'm awaiting my usual corked glass bottles, so until they arrive I will be shipping this tea in the corked test tubes seen in product photo number 4.

*´¨)
¸.·´¸.·*´¨) ¸.·*¨)
(¸.·´ (¸.·`Pixxxie Pie's Absinthe Tea is like nothing you've ever tasted & I am proud to present it as the fourth of my "Genuine Pixxxie Pressed Teas & Tinctures" collection. This practically magical tea has been a guarded recipe in my book of tasty shadows for quite some time and after much deliberation I've finally decided to share it with the world.

Comprised of wholesome, fair trade, organic ingredients it is an exotic melding of richly cultivated flavors.

♥-----------The headiest components here are luscious, high grade, organic Wormwood, Spearmint, Licorice and finely ground Lemon peel of identical caliber.

♥-----------Next on our alchemical component scroll is a mixed allotment of organic Peony White Tea leaves, Kukicha Twigs, a light sprinkling of Lavender Buds and whole Star Anise Pods.

♥-----------Rounding off all this goodness is the semi-precious wonderment that is Roasted Brown Rice. Like a velvet ribbon accent on a present that is already sinfully fantastic, it adds a certain richness that is distinct yet smooth. It's an exotic flavor, almost bordering on the roasted tastiness one finds with fresh Sesame Crackers.

♥-----------All of these components meld together to create a cup of one of the most stunning green teas you've ever tasted. To say this is the stuff of which wild daydreams are made of would be putting it lightly. The taste is unmistakably Absinthe with lush herbal tones melding effortlessly with lavish spicy notes thanks to the inclusion of whole Star Anise Pods and dapper Licorice.

I must say, do consider avoiding moonlit walks after gulping this down, for it can make you easy prey for all manner of night's creatures.

*´¨)
¸.·´¸.·*´¨) ¸.·*¨)
♥-----------(¸.·´ (¸.·`How Much You Receive------♥

You receive 1.5 oz of Genuine Pixxxie Pressed Tea. This makes about 3-5 cups of tea depending on the size of your tea cups.

I commonly use one of these bags in my personal tea pot and pull at least 6 cups from it since this is one of the rare loose teas that you can steep in hot water 2 times over and still get a delicious cup of tea that's not bitter from "over steeping".

♥----Tea is sent in a corked test tube with constituents layered in the lovely strata you see pictured. If you'd like the tea to come to you mixed please leave me a note in the Etsy buying notes box with the simple statement "Please Mix Tea Components".

♥----Product will come to you in gift-style wrappings, packaged securely with love and care.

♥----You will also find 2 re-usable drawstring muslin tea bags of cotton nestled alongside your tea. Despite sending you the little bags I really suggest using a tea pot with an internal basket or a French Press if you have one.

*´¨)
¸.·´¸.·*´¨) ¸.·*¨)
(¸.·´ (¸.·`--------------Notables------♥

-♥-Product contains caffeine.

-♥-All ingredients here are certified organic and fair trade.

-♥-Roasted Brown Rice is "puffed" and some grains puff out and expand like wild, giving them the look of small popcorn kernels.

-♥-Ingredients are super fresh, so this will probably be one of the freshest cups of tea you will ever taste. Very different from tea purchased at the grocery store that may have been sitting on the shelf for a long time. Drink lightly to gauge your tolerance, since this is strong stuff.

-♥-Tea lasts longer if you store it in a cool dry place that is not exposed to direct sunlight. This tea has a rather long shelf life but it tastes best if you drink it within 60 days of receiving it.

-♥-You can also jazz this tea up a bit more by adding whipped cream or milk. Or serve it over ice for the days when you'd rather have your tea cold.

-♥-Steeping Time: 3-4 Minutes in boiling water. Sugar to taste. Tea brews up to a pale golden color under normal circumstances. The green hue shown in many of the product pictures is accomplished through the use of Pixxxie Pie's Green Sugar. This is simply sugar that has had a small dab of natural green food coloring added to it.

-♥- 1.5 Oz Corked Bottle Makes 3-6 cups depending on how strong you brew your tea. The amount of Wormwood herb in this mix is safe and very low; less than half a teaspoon is used. Keep in mind that herb of Wormwood has a slightly bitter taste. Some clients like to mix this tea with other teas like mint tea or green herbal tea to soften the flavor of the wormwood while still taking advantage of its therapeutic and/or medicinal properties.

-♥-Product labels and such are produced in-house & are completely composed of recyclable parchment paper and the like. Peel off labels and keep tags to tuck away as keepsakes or bookmarks (they also attract fairy-folk)! Keep bottles for another use since its high quality and can easily be washed. Upcycling is your friend!

Products/Photos/Label Design © Pixxxie Pie & Posie


          Bullshit Medicine        
Today I was in the ER talking with one of the pediatric residents about jobs. We are both in the same boat, in that we are in the final year of our fellowships and are both studying for the licensing exams and looking for jobs (not to mention working well more than full-time). I suggested to Dr. F that she should become a child psychiatrist, as there is a huge need for them and she could easily get a job. A nearby nurse looked up and, not knowing I'm chief resident for Psychiatry, said, "Yeah, if you want to practice Bullshit Medicine!"

The problem is that there is a kernel of truth to that statement that makes me feel squicky inside. The reason I was in the ER in the first place was to transfer a kid from our ER to a psychiatric hospital for an assessment, a task which ordinarily takes 1-2 hours. It took me 4, simply because she was from a different county so that, instead of following the usual routine, I had to wade through mires of bullshit to find out where to send this girl and then get that hospital to accept the transfer. Meanwhile, what I was sending her for- an assessment of suicidality- was something I do weekly in that very ER and am fully competent and qualified to perform. I wasn't performing it in this case solely because the girl was 13 instead of 11-and-under and was from the wrong county. We only get paid by our county, and then only for kids 11-and-under. All others get shipped out.

There is an element of warehousing and bureaucracy inherent to crisis mental health work that is repugnant. The security guard I worked with on this case gave a good example of this. He used to work at the local adult psychiatric hospital, and told me about their "frequent fliers," folks who weren't mentally ill, but used the mental health system and the 5150 laws to secure a bed and a meal simply by calling 911 and reporting suicidal ideation. These individuals take up space designed for psychiatrically impaired adults, and cost us a fortune in doing so, money that could be better spent on other (less expensive) types of social services. Meanwhile, a large, large portion of the mentally ill adult population is warehoused in prisons, where they receive no treatment at all.

In my own 5150 assessments this sort of dilemma becomes apparent when a child who would, in an ideal environment, be able to go home ends up being sent to a psychiatric hospital because I don't trust their parent or their environment to keep them stabilized. In a lot of these "soft calls" the problem seems to me to be largely an issue of family, community, and environmental dynamics rather than a function of sincere mental illness, namely suicidal/homicidal ideation or being gravely disabled. Yet I send them off because, on the ethical scales, it is better to keep them in a safe warehouse for a few days than send them back to a sickness-inducing environment that cannot contain them.

I frequently wonder what it would be like to be in private practice- something I may be finding out in the not-too-distant future. I think of the multitudes of private practitioners who have never done these types of assessments or confronted these issues, but live in a narrowly confined and defined version of mental health. The idea scares me, because it makes Dr. Phils out of my colleagues rather than psychologists with a holistic experience of mental health in its varied and extreme forms. While I am under a lot of stress, and find this job overwhelming at times, I am grateful for being forced to confront these types of ethical dilemmas; these problems belong to everyone, not just those in the mental health professions.
          A Rube Goldberg Video Machine        

You arrive for your shift and find that your dear colleagues, great professionals, have broken the IP-KVM. Broken it completely, that is: the VGA video-input connector has been ripped right off the board; they probably dropped it. It was the last and only KVM, the rest went off to another data center for an international project, which means that for every order of a junky core2duo-grade dedicated server you have to trudge 500 meters down the street from the warm office to the module and sit there for 15 minutes to an hour, manually rolling all the stuff onto the servers locally. The module is cold and noisy, and the Wi-Fi is slow. What to do?

Time for an engineering approach. The VGA is dead, but keyboard commands from the KVM still reach the server just fine. In the spare-parts stock there is a long, long VGA cable, about thirty meters in a coil. Hook the KVM up to the server, connect a monitor to the server over VGA, put the monitor on a box, the box on a chair, the chair on a table, reinforce it all with duct tape, and roll the resulting tower up to the CCTV camera in the corner of the room, pointing the monitor straight into the lens. Voilà: the picture from the monitor is available over the network, and keyboard commands travel over the network too, albeit through a different channel.

Pleased with myself, I kick off a PXE image deployment on the server and trot back to the warm office, only to glance at the CCTV monitor upon returning and see that the server fell into a kernel panic during setup and isn't responding to the keyboard. Okay then.


          910-003357 Mouse Optical Black B100 USB        

Optical precision: 800 DPI
Compatibility: Windows Vista or Windows 7, Windows 8, Windows 10; Linux kernel 2.4+; Mac OS X 10.3.9 or later; Chrome OS
Corded mouse


          Open Source Security Inc. Announces World-First Fully CFI-Hardened OS Kernel        

The test patch for grsecurity® released today demonstrates a fully-featured version of RAP, a high-performance and high-security implementation of Control Flow Integrity (CFI). RAP is available commercially with a number of added benefits, including the ability to protect userland applications.

(PRWeb February 06, 2017)

Read the full story at http://www.prweb.com/releases/2017/02/prweb14044396.htm


          IOCTL Fuzzer v1.2 – Fuzzing Tool For Windows Kernel Drivers        

IOCTL Fuzzer is a tool designed to automate the task of searching vulnerabilities in Windows kernel drivers by performing fuzz tests on them. The fuzzer’s own driver hooks NtDeviceIoControlFile in order to take control of all IOCTL requests throughout the system. While processing IOCTLs, the fuzzer will spoof those IOCTLs conforming to conditions specified in […]

The post IOCTL Fuzzer v1.2 – Fuzzing Tool For Windows Kernel Drivers appeared first on Darknet - The Darkside.


          Heartbeat and DRBD        
In one deployment I had to replace a vrrpd (Virtual Router Redundancy Protocol) setup with heartbeat+drbd because a database was added to the servers in use. The initial services on these machines were only a static web server, named, and dhcpd, all relatively static, whose files I synchronized with rsync. But with the addition of a database (MySQL), a mechanism was needed whereby data stored on the primary machine is immediately written to the backup machine as well. For this last requirement vrrpd alone is not enough, so I had to replace vrrpd with heartbeat (pronounced hartbit, not hertbet :-) ), while drbd guarantees the cluster replication mechanism.

Implementing heartbeat by itself is very easy. Just download, compile, and configure three files: /etc/ha.d/ha.cf, /etc/ha.d/authkeys, and /etc/ha.d/haresources. For drbd you can download the tarball, but don't forget to read the documentation, because drbd must be compiled properly against the kernel source; otherwise you may run into trouble probing the drbd module. On this server I used openSUSE 11.1, which makes life easier: just use 1-click install for heartbeat, the drbd kernel module, and drbd user space, or enable the repository http://download.opensuse.org/repositories/server:/ha-clustering/

Heartbeat Configuration

Make sure you use two servers for the high-availability cluster. If you only have one, you don't need heartbeat and drbd :-). For more than 2 servers you should use pacemaker and openAIS, which can do N-to-N or N+1 clusters up to a theoretically unlimited number of nodes. But I will not explain pacemaker and openAIS here.
Each server uses two ethernet cards, or alternatively 1 ethernet card plus a direct connection between the two servers using a null-modem cable.
One ethernet card connects to the network, and the other should preferably link the two servers directly using a crossover cable (not required, but recommended).
Make sure the ethernet interfaces work properly.
In the scenario above, the real IP on eth0 is set permanently with ifup, while the virtual IP will be set through the /etc/ha.d/haresources file. Please change the IP addresses to match the ones you use.
Configure the files /etc/ha.d/ha.cf, /etc/ha.d/haresources, and /etc/ha.d/authkeys. These files must be identical on both servers.
Example ha.cf file

keepalive 2
warntime 5
deadtime 15
initdead 90
udpport 694
auto_failback on
bcast eth0
node server1 server2

bcast eth0 refers to the ethernet interface clients will use to reach the server. node is followed by the names of the primary and secondary servers, as reported by "uname -n".

Example authkeys file

If the two servers are connected with a null-modem or crossover cable, you can skip encryption and fill the authkeys file with, for example:

auth 2
2 crc

But if you go over a network, for example when the two servers are geographically separated, encryption is strongly recommended, using the format

auth num
num algorithm secret

You can generate one with the script below:

# ( echo -ne "auth 1\n1 sha1 "; dd if=/dev/urandom bs=512 count=1 | openssl md5 )  > /etc/ha.d/authkeys

Next, don't forget to make authkeys readable and writable only by root: # chmod 0600 /etc/ha.d/authkeys

Example /etc/ha.d/haresources file

The haresources configuration without drbd / before drbd is enabled, for example:

    server1 IPaddr::10.8.2.100/24/eth0 named dhcpd apache2
The meaning of that line is:

server1 --> the name of the primary server, as reported by "uname -n"
IPaddr::10.8.2.100/24/eth0 --> the virtual IP address used on eth0
named dhcpd apache2 --> the names of the redundant services
You can set the heartbeat service to run at the appropriate runlevels at boot, for example with "chkconfig heartbeat on" or, on openSUSE, with "insserv /etc/init.d/heartbeat". On openSUSE I prefer to start it through the /etc/init.d/after.local file, e.g. vim /etc/init.d/after.local:

#! /bin/sh
sleep 2
rcheartbeat start

Don't forget to copy all the configuration files you created on server1 over to server2 (use scp or whatever): ha.cf, haresources, authkeys, and after.local (if you use it). Heartbeat actually provides a facility to copy the configuration from the primary node to the other cluster nodes, ha_propagate; look for the file in /usr/share/heartbeat/ha_propagate or /usr/lib/heartbeat/ha_propagate. I myself prefer scp :-)
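
A minimal sketch of that copy step, assuming root SSH access from server1 to server2:

# run on server1: push the heartbeat configuration to server2
scp /etc/ha.d/ha.cf /etc/ha.d/haresources /etc/ha.d/authkeys root@server2:/etc/ha.d/
scp /etc/init.d/after.local root@server2:/etc/init.d/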

From server1, try "ifconfig"; if everything is OK, eth0:0 will appear with IP 10.8.2.100. From a client, try to ping and ssh to that IP; if you land on 10.8.2.4, heartbeat is working perfectly. Next, stop the heartbeat service on server1 and check with ifconfig that eth0:0 is gone. Log in to server2 and check with ifconfig; eth0:0 with IP 10.8.2.100 should now have been taken over by server2. To hand it back to server1, start the heartbeat service on server1 again. If all of this works, the heartbeat service is running perfectly. You can also test by taking eth0 down on server1 and confirming that the virtual IP eth0:0 is likewise taken over by server2.

DRBD Configuration

DRBD stands for Distributed Replicated Block Device. DRBD mirrors the entire defined block device and works as RAID-1 over the network. Configuring drbd is fairly easy, though not as easy as heartbeat :-P You need patience. A few things to watch out for: user space and kernel space must be the same version. There was a case where someone downloaded the tarball and then updated an existing drbd installation. When running configure he did not specify the kernel directory; as a result the drbd user space (e.g. drbdadm) got a newer version but the drbd.ko module was not updated. That can hang the machine :-( At minimum, configure with ./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc --with-km

Then run:

# cd drbd
# make clean
# make KDIR=/path/to/kernel/source
openSUSE users do not need to perform these steps; just install via 1-click install as I mentioned at the beginning of this article. Either way, it is worth confirming that the user-space tools and the kernel module match, as sketched below.
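
A quick sketch of such a check (the package name is an assumption and may differ between distributions):

# compare the kernel module version with the installed user-space package
modinfo drbd | grep -i version
rpm -q drbd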

Another common mistake when configuring drbd is creating a filesystem while repartitioning the disk for the drbd device. This must be avoided until we have loaded the drbd module for the first time. Here are the steps:

Prepare the partition for /dev/drbd that will be used for mutual replication, and leave the partition without a filesystem. The partition size determines how long the two sides take to synchronize: the larger the partition, the longer synchronization takes to reach the Consistent state. You can also prepare one extra partition for metadata, although it is not mandatory. The partition sizes on both servers must be identical.

Edit /etc/drbd.conf to read:

# You can find an example in  /usr/share/doc/drbd.../drbd.conf.example

#include "drbd.d/global_common.conf";
#include "drbd.d/*.res";

global{
usage-count yes;
}
common{
 protocol C;
}
resource r0{
 net{
  after-sb-0pri discard-younger-primary;
  after-sb-1pri discard-secondary;
  after-sb-2pri disconnect;
 }
on server1{
 device /dev/drbd0;
 disk /dev/cciss/c0d0p6;
 address 10.8.2.4:7788;
 meta-disk internal;
}
on server2{
 device /dev/drbd0;
 disk /dev/cciss/c0d0p6;
 address 10.8.2.5:7788;
 meta-disk internal;
}
}
On server1 & server2, run:

# modprobe drbd
# drbdadm up all
# cat /proc/drbd
Output like the following will appear on both servers:

server1:~ # cat /proc/drbd
version: 8.2.7 (api:88/proto:86-88)
GIT-hash: a1b440e8b3011a1318d8bff1bb7edc763ef995b0 build by lmb@hermes, 2009-02-20 13:35:59
 0: cs:Connected st:Secondary/Secondary ds:Inconsistent/Inconsistent C r---
    ns:45542488 nr:0 dw:0 dr:45542488 al:0 bm:2779 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

server2:~ # cat /proc/drbd
version: 8.2.7 (api:88/proto:86-88)
GIT-hash: a1b440e8b3011a1318d8bff1bb7edc763ef995b0 build by lmb@hermes, 2009-02-20 13:35:59
 0: cs:Connected st:Secondary/Secondary ds:Inconsistent/Inconsistent C r---
    ns:45542488 nr:0 dw:0 dr:45542488 al:0 bm:2779 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

Next, create the drbd metadata on each server:

server1:~ # drbdadm create-md r0
server1:~ # rcdrbd start
server2:~ # drbdadm create-md r0
server2:~ # rcdrbd start
We will make server1 the primary node, so run on server1:

server1:~ # drbdadm  primary all
server1:~ # drbdadm connect all
If there is a problem, it is most likely because a filesystem already exists. To wipe the filesystem without changing the partition, you can run

dd if=/dev/zero bs=512 count=512 of=/dev/your_partition

You may also hit an error while initializing drbd that leaves both disks in the Primary/Secondary Inconsistent/Inconsistent state. Initially everything should be Secondary/Secondary. If you run into this problem, run:

server1:~ # drbdadm -- --overwrite-data-of-peer primary all

Next, run on server1:

          server1:~# drbdsetup /dev/drbd0 primary --overwrite-data-of-peer

The initial synchronization will now start running.

server1:~ # cat /proc/drbd
version: 8.2.7 (api:88/proto:86-88)
GIT-hash: a1b440e8b3011a1318d8bff1bb7edc763ef995b0 build by lmb@hermes, 2009-02-20 13:35:59
 0: cs:SyncSource st:Primary/Secondary ds:UpToDate/Inconsistent C r---
    ns:36350976 nr:0 dw:0 dr:36351244 al:0 bm:2218 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:68502008
        [=====>..............] sync'ed: 34.7% (66896/102392)M
        finish: 53:31:01 speed: 348 (320) K/sec

The process takes quite a while and depends on the size of the disk used as the drbd device. Be patient and wait until the process finishes. I always wait for the first synchronization to reach 100% before doing anything (although this is not required). Once finished, the result will look like:

server1:~ # cat /proc/drbd
version: 8.2.7 (api:88/proto:86-88)
GIT-hash: a1b440e8b3011a1318d8bff1bb7edc763ef995b0 build by lmb@hermes, 2009-02-20 13:35:59
 0: cs:Connected st:Secondary/Secondary ds:UpToDate/UpToDate C r---
    ns:45542488 nr:0 dw:0 dr:45542488 al:0 bm:2779 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0

server2:~ # cat /proc/drbd
version: 8.2.7 (api:88/proto:86-88)
GIT-hash: a1b440e8b3011a1318d8bff1bb7edc763ef995b0 build by lmb@hermes, 2009-02-20 13:35:59
 0: cs:Connected st:Secondary/Secondary ds:UpToDate/UpToDate C r---
    ns:0 nr:44887544 dw:44887544 dr:0 al:0 bm:2740 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0 

Next, on server1, we create the filesystem. It only needs to be done on server1, since server2 will follow:

server1:~ # drbdadm primary all
server1:~ # mkfs.ext3 /dev/drbd0
Now prepare the directory for MySQL on server1:

mkdir /data-mysql

mount -t ext3 /dev/drbd0 /data-mysql

mv /var/lib/mysql /data-mysql

ln -s /data-mysql/mysql /var/lib/mysql

umount /data-mysql

On server2:

 mv /var/lib/mysql /tmp

 ln -s /data-mysql/mysql /var/lib/mysql

Edit the file /etc/ha.d/haresources on server1 and server2 to read:

server1 IPaddr::10.8.2.100/24/eth0 drbddisk::r0 Filesystem::/dev/drbd0::/data-mysql::ext3 named dhcpd apache2 mysql

Next, drbd and heartbeat just need to be started at run levels 3 and 5 every time the server boots. On openSUSE I use /etc/init.d/after.local to start drbd and heartbeat. This is just to make sure drbd and heartbeat are started last, after all the other services are up. Simply create the file /etc/init.d/after.local with contents such as:

#!/bin/sh

sleep 1
rcdrbd start
sleep 2
rcheartbeat start

Now we just need to test it: will the services defined in /etc/ha.d/haresources fail over to server2 when server1 is shut down? You know how to test this, right? It is roughly the same as testing heartbeat above.
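For example, a minimal sketch (the cluster IP 10.8.2.100 comes from the haresources line above; eth0 is an assumption about where the IPaddr resource lands):

server1:~ # rcheartbeat stop       # simulate a failure on server1
server2:~ # ip addr show eth0      # the cluster IP 10.8.2.100 should now appear here
server2:~ # cat /proc/drbd         # server2 should now report st:Primary/Secondary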

Have a lot of fun  
          Bandwidth Shaper Script on openSUSE        


# This post is meant only as basic material for understanding the topic
# It was written a long time ago and some parts may be deprecated
# Use at your own risk

A colleague asked why configuring openSUSE is so hard. He said that just to run a simple script that sets a default route, you have to write a convoluted script. He..he..he...
He said RedHat and its derivatives, like Fedora and CentOS, have rc.local; what is the equivalent in openSUSE?

OK. I have actually covered this before on my old blog and in several of my emails, but no matter, I will repeat it here and add a few things I consider relevant, since someone also happened to ask about load balancing internet traffic across two gateways and implementing HTB (hierarchical token bucket) for traffic shaping.

So I will explain load balancing, traffic shaping and rc.local on openSUSE all at once, while I have the chance to write.


LOAD BALANCING INTERNET TRAFFIC

At my site the internet connection goes through two ISPs: a 128 kbps leased circuit to ISP-A and ADSL to ISP-B. Long story short, I use a server with 3 ethernet cards:

eth0 ip address 202.158.xx.yyy netmask 255.255.255.240 gw 202.158.xx.yyy
eth1 ip address 10.0.50.5 netmask 255.255.255.248 gw 10.0.50.1
eth2 ip address 192.168.117.171 netmask 255.255.255.0 gw 192.168.117.171

For load balancing this traffic I follow the LARTC (Linux Advanced Routing & Traffic Control) HOWTO written by Bert Hubert (thanks, Om Bert). The prerequisite for load balancing is having the iproute2 package installed, which was already the case when I installed openSUSE 10.3.

Next:

# first, remove all default routes from the routing table
#
ip route del default
ip route del 10.0.50.0/29
ip route del 202.158.xx.zzz/28
ip route del 169.254.0.0/16

# add sensible routes back
#

ip route add 10.0.50.0/29 dev eth1 proto kernel scope link src 10.0.50.5
ip route add 202.158.xx.zzz/28 dev eth0 proto kernel scope link src 202.158.xx.xxx

# also add the load-balancing default route (your routers' IPs)
# weight indicates which one you prefer, 2 > 1
ip route add default scope global \
nexthop via 202.158.xx.yyy dev eth0 weight 1 \
nexthop via 10.0.50.1 dev eth1 weight 2

# add the policy-routing tables
#
ip route add default via 202.158.xx.yyy dev eth0 src 202.158.xx.xxx table ISP-A
ip route add default via 10.0.50.1 dev eth1 src 10.0.50.5 table ISP-B

# this part needs testing; sometimes we need it
ip route add 192.168.117.0/24 dev eth2 table ISP-A
ip route add 10.0.50.0/29 dev eth1 table ISP-A
ip route add 127.0.0.0/8 dev lo table ISP-A
ip route add 192.168.117.0/24 dev eth2 table ISP-B
ip route add 202.158.xx.yyy/28 dev eth0 table ISP-B
ip route add 127.0.0.0/8 dev lo table ISP-B

# don't forget to set up two ip rules so the system uses the policy routing above
#
ip rule add from 202.158.xx.xxx table ISP-A
ip rule add from 10.0.50.5 table ISP-B
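Note that the table names ISP-A and ISP-B must be declared in /etc/iproute2/rt_tables before ip will accept them; a minimal sketch (the table numbers 200 and 201 are arbitrary picks):

echo "200 ISP-A" >> /etc/iproute2/rt_tables
echo "201 ISP-B" >> /etc/iproute2/rt_tables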

# set up IP masquerading
#
iptables -t nat -A POSTROUTING -s 192.168.117.0/24 -d 0/0 -o eth0 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 192.168.117.0/24 -d 0/0 -o eth1 -j MASQUERADE

# set the TOS field so the router doesn't get confused and ssh and ftp keep working

iptables -t mangle -A PREROUTING -j TOS --set-tos 0x00
iptables -t mangle -A OUTPUT -j TOS --set-tos 0x00

Keep in mind that the balancing here is not perfect, because it is route based and routes are cached. So routes to frequently visited sites will always go through the same provider.
For example, on this server a traceroute to www.detik.com always goes through ISP-A, and a traceroute to www.republika.co.id always goes through ISP-B.
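If you change the weights, or one ISP link goes down, the cached decisions can be cleared so that new connections are balanced afresh:

ip route flush cache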


TRAFFIC SHAPING

The goals of traffic shaping here are (you can of course shape with other goals in mind, he..he..he..):
- keep latency low for interactive traffic, so that uploads or downloads never disturb ssh.
- keep browsing at reasonable speeds while uploading or downloading.
- make sure uploads do not hurt downloads, and vice versa.
- mark ports or hosts that eat a lot of traffic as low priority.

There are many sources on the internet, for example this article, and my favourite is once again Om Bert's LARTC document. Don't forget to read the prerequisites for running HTB and make sure your kernel supports it.

My implementation is simple, as shown below.

For eth0 and eth1:

DOWNLINK=96 # for eth0; for eth1 -> DOWNLINK=288
UPLINK=80 # for eth0; for eth1 -> UPLINK=20
DEV=eth0 # replace with eth1 for eth1

# low priority OUTGOING traffic - you can leave this blank if you want
# low priority source netmasks
NOPRIOHOSTSRC=

# low priority destination netmasks
NOPRIOHOSTDST=

# low priority source ports
NOPRIOPORTSRC=

# low priority destination ports
NOPRIOPORTDST=

if [ "$1" = "status" ]
then
tc -s qdisc ls dev $DEV
tc -s class ls dev $DEV
exit
fi

# clean existing down- and uplink qdiscs, hide errors
tc qdisc del dev $DEV root 2> /dev/null > /dev/null
tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null

if [ "$1" = "stop" ]
then
exit
fi

###### uplink

# install root HTB, point default traffic to 1:20:

tc qdisc add dev $DEV root handle 1: htb default 20

# shape everything at $UPLINK speed - this prevents huge queues in your
# DSL modem which destroy latency:

tc class add dev $DEV parent 1: classid 1:1 htb rate ${UPLINK}kbit burst 6k

# high prio class 1:10:

tc class add dev $DEV parent 1:1 classid 1:10 htb rate ${UPLINK}kbit \
burst 6k prio 1

# bulk & default class 1:20 - gets slightly less traffic,
# and a lower priority:

tc class add dev $DEV parent 1:1 classid 1:20 htb rate $[9*$UPLINK/10]kbit \
burst 6k prio 2

tc class add dev $DEV parent 1:1 classid 1:30 htb rate $[8*$UPLINK/10]kbit \
burst 6k prio 2

# all get Stochastic Fairness:
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev $DEV parent 1:30 handle 30: sfq perturb 10

# TOS Minimum Delay (ssh, NOT scp) in 1:10:

tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
match ip tos 0x10 0xff flowid 1:10

# ICMP (ip protocol 1) in the interactive class 1:10 so we
# can do measurements & impress our friends:
tc filter add dev $DEV parent 1:0 protocol ip prio 10 u32 \
match ip protocol 1 0xff flowid 1:10

# To speed up downloads while an upload is going on, put ACK packets in
# the interactive class:

tc filter add dev $DEV parent 1: protocol ip prio 10 u32 \
match ip protocol 6 0xff \
match u8 0x05 0x0f at 0 \
match u16 0x0000 0xffc0 at 2 \
match u8 0x10 0xff at 33 \
flowid 1:10

# rest is 'non-interactive' ie 'bulk' and ends up in 1:20

# some traffic however suffers a worse fate
for a in $NOPRIOPORTDST
do
tc filter add dev $DEV parent 1: protocol ip prio 14 u32 \
match ip dport $a 0xffff flowid 1:30
done

for a in $NOPRIOPORTSRC
do
tc filter add dev $DEV parent 1: protocol ip prio 15 u32 \
match ip sport $a 0xffff flowid 1:30
done

for a in $NOPRIOHOSTSRC
do
tc filter add dev $DEV parent 1: protocol ip prio 16 u32 \
match ip src $a flowid 1:30
done

for a in $NOPRIOHOSTDST
do
tc filter add dev $DEV parent 1: protocol ip prio 17 u32 \
match ip dst $a flowid 1:30
done

# rest is 'non-interactive' ie 'bulk' and ends up in 1:20

tc filter add dev $DEV parent 1: protocol ip prio 18 u32 \
match ip dst 0.0.0.0/0 flowid 1:20

########## downlink #############
# slow downloads down to somewhat less than the real speed to prevent
# queuing at our ISP. Tune to see how high you can set it.
# ISPs tend to have *huge* queues to make sure big downloads are fast
#
# attach ingress policer:

tc qdisc add dev $DEV handle ffff: ingress

# filter *everything* to it (0.0.0.0/0), drop everything that's
# coming in too fast:

tc filter add dev $DEV parent ffff: protocol ip prio 50 u32 match ip src \
0.0.0.0/0 police rate ${DOWNLINK}kbit burst 10k drop flowid :1

This script works quite well on ADSL, but you have to experiment until you find the optimum values for DOWNLINK and UPLINK. The usual ADSL problem is that the upload speed is far below the download speed, and because TCP/IP keeps sending packets until there is finally no room left for them, the modem usually hangs. With a fast download speed, users keep downloading from several sites at once, so the accumulated upload traffic grows large. Once this upload traffic saturates the ADSL modem, the modem hangs.

That is why upload traffic must be kept under control so that it never saturates the ADSL modem; this is done by lowering the UPLINK value until the optimum is reached. The optimum is reached when network latency is at its lowest and the connection no longer drops. For more, read Om Bert's LARTC document mentioned above.
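A simple way to home in on that optimum (a sketch; 10.0.50.1 is just the example gateway from above): start a sustained upload, then watch the latency from a second terminal and keep lowering UPLINK until the round-trip times stay low:

ping -c 20 10.0.50.1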


rc.local on openSUSE

There is no rc.local in openSUSE (he..he..he...)
If we look at RedHat (and its clones), rc.local is run after all services have been started at run level 5. There is no equivalent in openSUSE.

Users usually assume that boot.local in /etc/init.d is the equivalent of rc.local. This is a wrong assumption, because boot.local runs at the very beginning, before the other services are started. So users often put an iptables script into boot.local and then complain that the script does not work. That happens because the iptables script is called before the network service is configured at run level 3, so of course it cannot work.

In openSUSE we have to know at which point our script must run and what it requires, although generally it will run at run levels 3 and 5. For example, if we want to run the load-balancing script above, the network service must already be up first.

As the basis for such a script we can use the file /etc/init.d/skeleton, although nothing stops you from using another script, as I will show below.

The script for the traffic shaper:
#!/bin/sh
#
#
# /etc/init.d/bwshaper_eth0
#
### BEGIN INIT INFO
# Provides: bwshaper_eth0
# Required-Start: $network
# Should-Start:
# Required-Stop:
# Should-Stop:
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Short-Description: Custom shaping using htb for eth0 to ISP-A
# Description: decreases the upload traffic on eth0 to ISP-A by queueing with htb,
# script by medwin@gmail.com
### END INIT INFO
#

test -s /etc/rc.status && . /etc/rc.status && rc_reset

case "$1" in
start )

## put your script here

rc_status -v
;;
stop)
# ok, let's test
;;

esac

rc_exit

# end of script

The script above is just one simple example. Pay attention to this part:
### BEGIN INIT INFO
# Provides: bwshaper_eth0 -> this is the name of your service
# Required-Start: $network -> these are the services that must already be running before your script starts
# Should-Start:
# Required-Stop:
# Should-Stop:
# Default-Start: 3 5 -> these are the run levels at which your script runs
# Default-Stop: 0 1 2 6
# Short-Description: Custom shapping using htb for eth0 to ISP-A
# Description: decreased the upload traffic on eth0 to ISP-A by doing queuing using htb, script by medwin@gmail.com
### END INIT INFO
This section is read by insserv to decide at which run levels your script is started.
Copy your script to /etc/init.d.
To register it as a service, run:

> insserv (service name)

then check with

> chkconfig --list

to verify that your service is now listed at the appropriate run levels.
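For the script above that would be (a sketch, using the service name from the INIT INFO header):

> insserv bwshaper_eth0
> chkconfig --list bwshaper_eth0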

You can also get creative and build a service script that can be started and stopped, for example by inserting the following

case "$1" in

start)

echo -n "Starting bandwidth shaping on eth0: "
start
echo "done"
;;

stop)

echo -n "Stopping bandwidth shaping on eth0: "
stop
echo "done"
;;

restart)

echo -n "Restarting bandwidth shaping on eth0: "
restart
echo "done"
;;

status)

echo "Bandwidth shaping status for $IF:"
show
echo ""
;;

*)

pwd=$(pwd)
echo "Usage: tc.bash {start|stop|restart|status}"
;;

into your script. Then create a symbolic link to the file in /usr/sbin or /sbin, for example named rcbwshaper_eth0:

> ln -s /etc/init.d/bwshaper_eth0 /usr/sbin/rcbwshaper_eth0

so that you can invoke it with

> rcbwshaper_eth0 {start|stop|restart|status}

OK, good luck trying it out.
Till then keep safe and stop global warming.
          Traffic Shaping - Part 2        
In this part we will discuss how to classify packets and then mark them (packet marking) based on the packet's TOS field in the Linux kernel. We will hand packet classification over to iptables, and HTB will then do the queueing based on the marks set by iptables. Briefly, TOS (Type of Service, something every Linux user interested in networking and Quality of Service should understand) is the part of the packet that determines its priority. TOS consists of 8 bits (an octet): bits 0, 1 and 2 are precedence, bits 3, 4, 5 and 6 are TOS, and bit 7 is the MBZ (Must Be Zero) bit.


By default, the TOS bit values are as follows:
  • 1000 (binary)      8 (decimal)       Minimize delay (md)
  • 0100 (binary)      4 (decimal)       Maximize throughput (mt)
  • 0010 (binary)      2 (decimal)       Maximize reliability (mr)
  • 0001 (binary)      1 (decimal)       Minimize monetary cost (mmc)
  • 0000 (binary)      0 (decimal)       Normal service
To learn more about TOS, read RFC1349 and RFC2474.
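To see which packets actually carry one of these bits on the wire, here is a small sketch with tcpdump (ip[1] is the TOS byte of the IP header, and 0x10 is Minimize-Delay):

tcpdump -n -i eth1 'ip[1] & 0x10 != 0'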

With iptables we can mark packets based on the TOS bits, and that is what our script will do. The packet header is mangled by iptables and a mark is inserted as we wish. (Thanks to Rusty Russell, Harald Welte, Patrick McHardy and the others for making iptables such a nice userland tool for the Linux community. Back in 2006, about two years ago, I happened to work alongside one of the iptables/netfilter contributors, Fabrice Marie; he wrote one of the netfilter HOWTOs, and he is very down to earth, friendly and happy to share his knowledge. At the time I did not know he was one of the contributors...)

In the script I gave in the previous article, pay attention to this part:
tc filter add dev eth1 parent 1:0 protocol ip prio 1 handle 1 fw classid 1:10
tc filter add dev eth1 parent 1:0 protocol ip prio 2 handle 2 fw classid 1:11
tc filter add dev eth1 parent 1:0 protocol ip prio 3 handle 3 fw classid 1:12
tc filter add dev eth1 parent 1:0 protocol ip prio 4 handle 4 fw classid 1:13
tc filter add dev eth1 parent 1:0 protocol ip prio 5 handle 5 fw classid 1:14
tc filter add dev eth1 parent 1:0 protocol ip prio 6 handle 6 fw classid 1:15
In the previous article we created 6 HTB qdisc classes but did not classify packets yet, so all upload packets from our network pass through class 1:15 (we defined tc qdisc add dev eth1 root handle 1: htb default 15). Now we must classify packets so that particular packets land in a particular HTB class. The lines above are the filters that distribute packets into classes based on iptables' packet classification. Using iptables is highly recommended because it is very flexible, counts packets for each rule quickly, and, thanks to the RETURN target, a packet does not need to traverse all the rules.

What the commands above do is tell the kernel that packets with a specific FWMARK value (handle x fw) must go into a specific class (classid x:xy).

If you do not yet understand how iptables works, download the HOWTO here, or at least study the diagram by Jan Engelhardt (jengelh is an openSUSE user and one of the contributors to the openSUSE Build Service).

Suppose your local network is 192.168.0.0/24 and your public IP is 202.170.1.2; then enable NAT with iptables (SuSEfirewall users do not need to run these iptables commands, but should instead follow the SuSEfirewall steps in the next paragraph. I am a SuSEfirewall user myself).
  • echo 1 > /proc/sys/net/ipv4/ip_forward
  • iptables -t nat -A POSTROUTING -s 192.168.0.0/255.255.255.0 -o eth1 -j SNAT --to-source 202.170.1.2
For SuSEfirewall users: open the file /etc/sysconfig/SuSEfirewall2 and fill in the section below:
FW_DEV_EXT="eth1"       -> adjust to the interface holding the public IP
FW_DEV_INT="eth2"       -> adjust to the interface holding the local IP
FW_ROUTE="yes"
FW_MASQUERADE="yes"
FW_MASQ_DEV="zone:ext"
FW_MASQ_NETS="192.168.0.0/24"
FW_CUSTOMRULES="/etc/sysconfig/scripts/SuSEfirewall2-custom"
Then start adding rules to the PREROUTING chain in the mangle table:
iptables -t mangle -A PREROUTING -p icmp -j MARK --set-mark 0x1
iptables -t mangle -A PREROUTING -p icmp -j RETURN
iptables -t mangle -A PREROUTING -m tos --tos Minimize-Delay -j MARK --set-mark 0x1
iptables -t mangle -A PREROUTING -m tos --tos Minimize-Delay -j RETURN
iptables -t mangle -A PREROUTING -m tos --tos Minimize-Cost -j MARK --set-mark 0x5
iptables -t mangle -A PREROUTING -m tos --tos Minimize-Cost -j RETURN
iptables -t mangle -A PREROUTING -m tos --tos Maximize-Throughput -j MARK --set-mark 0x6
iptables -t mangle -A PREROUTING -m tos --tos Maximize-Throughput -j RETURN
iptables -t mangle -A PREROUTING -p tcp -m tcp --sport 22 -j MARK --set-mark 0x1
iptables -t mangle -A PREROUTING -p tcp -m tcp --sport 22 -j RETURN
iptables -t mangle -A PREROUTING -p tcp -m tcp --dport 22 -j MARK --set-mark 0x1
iptables -t mangle -A PREROUTING -p tcp -m tcp --dport 22 -j RETURN
iptables -t mangle -I PREROUTING -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN -j MARK --set-mark 0x1
iptables -t mangle -I PREROUTING -p tcp -m tcp --tcp-flags SYN,RST,ACK SYN -j RETURN
iptables -t mangle -A PREROUTING -p tcp -m tcp --dport 587 -j MARK --set-mark 0x5
iptables -t mangle -A PREROUTING -p tcp -m tcp --dport 587 -j RETURN
iptables -t mangle -A PREROUTING -p tcp -m tcp --dport 993 -j MARK --set-mark 0x5
iptables -t mangle -A PREROUTING -p tcp -m tcp --dport 993 -j RETURN
iptables -t mangle -A PREROUTING -j MARK --set-mark 0x6
What the rules above do:
  1. mark ICMP traffic with FWMARK 0x1
  2. -j RETURN for ICMP traffic, so ICMP does not fall through to the rules below it
  3. mark all TOS Minimize-Delay traffic with FWMARK 0x1
  4. -j RETURN for TOS Minimize-Delay traffic, so it does not fall through to the rules below it
  5. mark all TOS Minimize-Cost traffic with FWMARK 0x5
  6. -j RETURN for TOS Minimize-Cost traffic, so it does not fall through to the rules below it
  7. mark all TOS Maximize-Throughput traffic with FWMARK 0x6
  8. -j RETURN for TOS Maximize-Throughput traffic, so it does not fall through to the rules below it
  9. mark traffic coming from the SSH port with FWMARK 0x1
  10. -j RETURN for traffic coming from the SSH port, so it does not fall through to the rules below it
  11. mark traffic going to the SSH port with FWMARK 0x1
  12. -j RETURN for traffic going to the SSH port, so it does not fall through to the rules below it
  13. mark traffic carrying the SYN flag with FWMARK 0x1
  14. -j RETURN for traffic carrying the SYN flag, so it does not fall through to the rules below it
  15. mark traffic going to port 587 with FWMARK 0x5
  16. -j RETURN for traffic going to port 587, so it does not fall through to the rules below it
  17. mark traffic going to port 993 with FWMARK 0x5
  18. -j RETURN for traffic going to port 993, so it does not fall through to the rules below it
  19. traffic not matched by any earlier rule is marked with FWMARK 0x6 and will go into class 1:15
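Whether these rules are actually matching traffic can be checked from the per-rule packet counters; a minimal sketch:

iptables -t mangle -L PREROUTING -n -v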
Then do the same for the OUTPUT chain. Repeat the mangle-table script for PREROUTING, replacing every occurrence of PREROUTING with OUTPUT. This ensures that all traffic generated locally, on the server where the script lives, is classified as well. But the very last line of the script should be replaced with: iptables -t mangle -A OUTPUT -j MARK --set-mark 0x3. This gives local traffic a higher priority; it will go into class 1:12.

Put the OUTPUT chain and PREROUTING chain rules into the iptables script you have been using. SuSEfirewall users: edit the file /etc/sysconfig/scripts/SuSEfirewall2-custom and put the rules into the before-antispoofing section, as below:

fw_custom_before_antispoofing(){
iptables -t mangle -A PREROUTING -p icmp -j MARK --set-mark 0x1
iptables -t mangle -A PREROUTING -p icmp -j RETURN
# ... and so on
iptables -t mangle -A PREROUTING -j MARK --set-mark 0x6
iptables -t mangle -A OUTPUT -p icmp -j MARK --set-mark 0x1
iptables -t mangle -A OUTPUT -p icmp -j RETURN
# ... and so on
iptables -t mangle -A OUTPUT -j MARK --set-mark 0x3
true
}
Run the script I provided and restart SuSEfirewall or iptables, then try the command:
tc -s class show dev eth1
Now notice that the packet counts increase in every class. If some class stays empty, you need to re-tune the priorities or the FWMARK values you assign, since this differs per network depending on how your users use it. Also, if some class is constantly full, another queueing discipline should be added so the bandwidth is shared more fairly. This is done with SFQ (stochastic fairness queueing). In the example script I added the classes as follows:
tc qdisc add dev eth1 parent 1:12 handle 120: sfq perturb 10
tc qdisc add dev eth1 parent 1:13 handle 130: sfq perturb 10
tc qdisc add dev eth1 parent 1:14 handle 140: sfq perturb 10
tc qdisc add dev eth1 parent 1:15 handle 150: sfq perturb 10
This means: attach an SFQ queueing discipline to class 1:12 (and so on), with handle 120 (and so on), re-hashing every 10 seconds. SFQ makes sure the bandwidth is shared fairly between traffic flows. Your situation may differ, but this script can serve as the basis for configuring your own network.

Hopefully this short explanation is understandable. In the next article I will explain the remaining parts of the script.

          The Willits News - All-day Harvest Festival on October 21        
This is something that is going on in my little rural town next Saturday.
The Willits News - All-day Harvest Festival on October 21
I am volunteering two dishes for the 100 mile dinner.
What is really interesting to me is how very challenging it is to make a meal in which everything comes from within 100 miles. We will have a table with dishes: soups, salads, main dishes, vegetable side dishes and desserts that come from within 100 miles, save an ingredient or two. In front of each dish will be little placards that name the things in them that do NOT come from within 100 miles, like salt, flour, breadcrumbs. Here are my dishes, with the not-within-100-miles things bolded in the recipes:

Corn, Zucchini and Tomato Pie
(Posted by Caraflora on the Weight Watcher's Veggieboard, July 2004)
3 cups fresh or frozen and defrosted corn kernels
5 small zucchini, cut into matchstick pieces
2 tsp salt
1-3 T fresh dill weed
1 T olive oil
4 ripe tomatoes, cut into 1/2 inch slices
1/2 cup grated or shredded Parmesan cheese
1/4 cup dry breadcrumbs

Preheat the oven to 375 degrees. In a 13 x 9" ovenproof baking dish, combine the corn, zucchini, 1 tsp of salt, the dill and 1 T of olive oil, tossing to coat the vegetables. Cover the vegetables with the tomato slices. Sprinkle with remaining salt.

In a small bowl, combine the cheese and bread crumbs. Sprinkle the mixture over the tomatoes and mist the top with olive oil, using a Misto. Bake the pie for 30-50 minutes, until the tomatoes are soft and starting to caramelize and the cheese is bubbling.
Remove from the oven and let stand for 5 minutes before serving.

Provencal Butternut Squash Gratin
(adapted from The Mediterranean Vegan Kitchen, adapted to omit nutmeg and pepper which are not locally grown. The recipe has been doubled from how it was written in the book.)
2 butternut squashes (about 6 lbs), peeled, seeds and membranes removed, coarsely chopped
2 cups packed fresh parsley chopped (I will use 1 cup dried parsley)
6 large cloves of garlic finely chopped
1/2 cup all purpose flour
1/2 tsp ground sage (I will use fresh, finely minced)
salt
1/2 cup vegetable broth (made with collected veggie trimmings)
4 T extra virgin olive oil

Preheat the oven to 350 degrees. Lightly oil a 9x13" baking dish or 5 qt gratin dish.

In a large bowl, combine the squash, parsley and garlic. Sprinkle with the flour, sage and salt; toss well to combine. Add the broth and 2T of the oil; stir well to combine. Transfer to the prepared baking dish and drizzle with the remaining oil.

Bake for about 1 hour, stirring halfway through the cooking time, or until the top is nicely browned and the squash is meltingly tender.


I am very excited to live in a town that has a whole, vibrant organization in place to create a sustainable Willits.
I will report back next Sunday with pictures and a story.
          The Ancient Chemistry Inside Your Taco        

When you bite into a taco, quesadilla, or anything else involving a traditionally made corn tortilla, your taste buds get to experience the results of an ancient chemical process called nixtamalization. The technique dates to around 1500 BCE and involves cooking corn kernels with an alkaline substance, like lime or wood ash, which makes the dough softer, tastier, and much more nutritious.

Only in the 20th century did scientists figure out the secret of nixtamalization—the process releases niacin, one of the essential B vitamins. Our guest, archaeologist and nixtamalization expert Rachel Briggs, says that the historical chemical process transformed corn from a regular food into a viable dietary staple, one that cultures around the world continue to rely on for many of their calories. Without nixtamalization Mesoamerican civilizations like the Maya and the Aztec would not have survived, let alone flourished.

Benjamin Miller and Christina Martinez are the only chefs in Philadelphia making their tortillas from scratch. Our associate producer, Rigoberto Hernandez, visited the couple at their traditional Mexican restaurant in South Philadelphia to find out why they’re so dedicated to handmade tortillas—and to see the nixtamalization process in action.

 

Credits:

Hosts: Michal Meyer and Bob Kenworthy
Guest: Rachel Briggs
Reporter: Rigoberto Hernandez
Producer: Mariel Carr
Associate Producer: Rigoberto Hernandez

Music courtesy of the Audio Network


          Using Essential Oils: Aromatherapy Carrier Oils        

Carrier oils are the basis of aromatherapy blends; they dilute the concentrated, volatile essential oils and so make for a safer aromatic mix.
In the practice of aromatherapy, carrier oils are often thought of as secondary to essential oils; in fact, carrier oils are the primary basis of aromatherapy blends and are needed to use the majority of essential oils effectively and safely. Carrier oils have many properties in their own right, in addition to the essential-oil properties in an aromatherapy blend.


What are Carrier Oils?
In aromatherapy, the most common carrier oils are vegetable oils; however, gel, cream, distilled water, bubble bath, shampoo, honey and milk can also be used as carriers, depending on the blend and the method of application. Vegetable oils used in aromatherapy are quite different from those used for cooking, and the two must never be substituted for one another.


The Different Types of Carrier Vegetable Oils in Aromatherapy
Cold-pressed vegetable oils are the preferred carrier oil for aromatherapy use; a hot-pressed carrier vegetable oil will not retain the same therapeutic properties as a cold-pressed one, due to the processing methods used. The initial processing of a carrier oil dictates the therapeutic properties it will retain in the end. Carrier vegetable oils can be defined as follows:


basic carrier oil – the standard
fixed carrier oil – a 'combination' of specialist oils (blended because high prices or extraction difficulties prevent their use on their own)
macerated carrier oil – the basic carrier oil combined with some plant parts to obtain additional properties.
Common Carrier Vegetable Oils Used in Aromatherapy
There are a wide variety of vegetable oils which are used as a carrier oil in aromatherapy; each carrier oil possesses its own therapeutic properties. Some of the more popular carrier vegetable oils include:
sweet almond oil (Prunus dulcis) – useful for soothing skin inflammation, eczema, sunburn, dry skin and to soften skin
apricot kernel oil (Prunus armeniaca) – useful for sensitive skin, mature skin and skin nourishment; similar to sweet almond oil
calendula oil (Calendula officinalis) – a macerated oil useful for inflammation, bruising, rashes, eczema and varicose veins
jojoba oil (Simmondsia chinensis) – more a 'wax' than an oil, useful for dry skin, psoriasis, eczema, sunburn, arthritis and rheumatism.
sunflower oil ( Helianthus annuus) – useful for bruises, skin diseases and asthma.

Other Carrier Oils Used in Aromatherapy
Other carrier oils used in aromatherapy include (not an exhaustive list):
Avocado (Persea gratissima Caertn.)
Borage (Borago officinalis L.)
Carrot (Daucus carota)
Cocoa butter (Theobroma cacao)
Coconut (Cocos nucifera L.)
Evening primrose (Oenothera biennis)
Grapeseed (Vitis vinifera)
Macadamia (Macadamia ternifolia)
Olive (Olea europaea)
Palm kernel (Elaeis guineensis)
St john's wort (Hypericum perforatum)
Walnut (Juglans regia).


How Carrier Oils Work
Carrier oils are absorbed by the body in much the same way as essential oils; up to the end of the 19th century it was assumed that the skin could not absorb soluble solutions such as carrier oils (studies such as Fleischer 1877 concluded this). However, various studies in the 20th century (including that of Valette and Sorbin 1963) have concluded that carrier oils can be absorbed by the skin and can thus be therapeutic to the body.


Carrier Oils in Aromatherapy
Understanding the use of carrier oils in aromatherapy is essential to making successful aromatherapy blends and to knowing the therapeutic properties a carrier oil may have; combined with the properties of various essential oils, carrier oils can be used effectively to relieve a great number of health problems. However, as is the case when using any essential oils or carrier oils, or if you are unfamiliar with the practice of aromatherapy, expert advice should be sought.


          Tips: How Can Aromatherapy Reduce Stress        

Aromatherapy is a practice that has been in use for centuries and is gaining more and more popularity in our society today. Put in a very simple and easy-to-understand way, aromatherapy is the practice of using essential oils and specific scents, such as lavender, vanilla, jasmine and lemon, to improve one's mood and overall health, as well as to reduce high levels of stress.


Sense of Smell

Our sense of smell is very well developed, not to mention strong. Scents have a way of working on the brain to stimulate memory, relax, energize, and make a person feel just plain happy to be alive and kicking! As an example, lavender is an herb with a very calming scent, often used to help people fall asleep. In days gone by, lavender was sewn into pillows to aid relaxation, and it was also sprinkled on handkerchiefs to sniff whenever a person felt nervous or tense. A number of products for babies, such as powders, lotions and oils, contain lavender.

Soothing to the System

Whether it really is soothing to the nervous system or works through the power of suggestion, nobody knows for sure. But aromatherapy does do the trick, and it is fast becoming better known as a viable method of calming down and reducing a heavy stress load.


How can Aromatherapy Reduce Stress?

Pour a few drops of an essential oil of your choice (I recommend lavender) into a bath and then take a long soak. Ahhh. Both the delightful scent and the luxurious soak will do your body and mind a great deal of good. Just relax your mind and feel those fears melting away!

When you're feeling stressed and nothing else seems to help, sniff a calming scent such as lavender, rose or sandalwood, just as you would breathe in the scent of any other perfume.

Here's a good one to try, handed down to me by my kind grandmother: use aromatherapy as a deodorizer for a room. Here's how: pour a small amount of vanilla into a pan of water and let it simmer on the stove. Whatever you do, do not let the pan boil dry. The smell it gives off will delight you and make your home smell amazing!

Aromatherapy and massage were made for one another. Go ahead and make your own massage oil. It's simple: just add a few drops of your favorite oil to an unscented oil such as almond, and then reap the benefits. Ahhh! I guarantee you will not be disappointed with the results!

Word of Caution

These are just a few ideas for incorporating aromatherapy into your life to reduce stress. A word of caution though: it's not a good idea to use essential oils at full strength on your skin, as they can be highly irritating. Instead, dilute them first with a carrier oil. One of the best to try is almond. For those unsure what a carrier oil is, let me explain.

Carrier Oils

Carrier oils, also referred to as base oils or vegetable oils, are used to dilute essential oils before they are applied to the skin. They "carry" the essential oil onto the skin. Different carrier oils offer different properties, and the choice of carrier oil can depend on the therapeutic benefit being sought. Carrier oils are generally cold-pressed vegetable oils taken from the fatty portions of the plant. Carrier oils do not evaporate or impart their aroma as strongly as essential oils do. Examples of carrier oils are sweet almond, avocado, grape seed, apricot kernel, peanut, olive, pecan, sesame, macadamia nut, evening primrose, walnut and wheat germ.

With these tips, we hope you will find out how aromatherapy can reduce stress. Bookmark this blog now for more aromatherapy courses.
          Setting up Kdump and Crash for ARM-32 – an Ongoing Saga        
Learn how to set up the sophisticated kdump-crashkernel (kernel dump) debug mechanism for a modern Linux kernel running on an (emulated) ARM-32 (the Vexpress-Cortex-A9 board).
          Linux Kernel Version Timeline        
I wanted to quickly look up Linux kernel release dates by version number. All the info is on kernelnewbies.org. I’ve just copied it below… Click on the version # links (below) to see details of that version (redirects to the kernelnewbies website). Source: http://kernelnewbies.org/LinuxVersions Last Updated: 01 Apr 2016 4.x Linux 4.5 Released 13 March, 2016 … Continue reading Linux Kernel Version Timeline
          RE: I don't trust Canonical        
You've got it wrong by comparing Canonical/Ubuntu and Linux Mint. Linux Mint, as I see it, is just a Linux desktop for Linux users who want Linux desktops, with no vision of providing a smartphone or tablet experience. Canonical, meanwhile, develops Unity in the hope that with _that_ interface they can provide a cross-device UX in accordance with their company vision (go figure what this vision is and compare it to the content of your post). Canonical's goal, I think, is not to become the #1 contributor to the Linux kernel. So your post, like any other that bashes Canonical, is highly unfounded and based on poor discernment of reality.
          Beyond Android: Our first look at Google Fuchsia        
Today we’re having our first look at the project code-named Google Fuchsia – a mobile OS that departs from Android and Chrome. While Google’s Chrome OS and Android OS were both based on Linux, Fuchsia is not. Fuchsia OS is based on Google’s own “Magenta” microkernel, not to be confused with the Google AI music project of the same name. … Continue reading
          Arch Linux :: RE: How do I install Arch..        
Author: chip
Posted: Sat Aug 05, 2017 6:38 am (GMT 0)

Svetozar wrote:
I'm back on this forum after 7 years... How embarrassing... The questions I used to ask.

[svetozar@E1]: ~>$ screenfetch -n
svetozar@E1
OS: Arch Linux
Kernel: x86_64 Linux 4.12.3-1-ARCH
Uptime: 3h 40m
Packages: 783
Shell: bash
Resolution: 2966x900
DE: Cinnamon 3.4.4
WM: Muffin
WM Theme: Tyr jord (Windows 10)
GTK Theme: Windows 10 [GTK2/3]
Icon Theme: Adwaita
Font: Serif 9
CPU: Intel Pentium B960 @ 2x 2.2GHz [59.0°C]
GPU: intel
RAM: 2892MiB / 7805MiB


Arch is a really cool system, but spending that much time on a desktop hobby-horse can only be called busywork for schoolkids. Ubuntu rules.
_________________


          Arch Linux :: RE: How do I install Arch..        
Author: Svetozar
Posted: Fri Aug 04, 2017 9:30 am (GMT 0)

I'm back on this forum after 7 years... How embarrassing... The questions I used to ask.

[svetozar@E1]: ~>$ screenfetch -n
svetozar@E1
OS: Arch Linux
Kernel: x86_64 Linux 4.12.3-1-ARCH
Uptime: 3h 40m
Packages: 783
Shell: bash
Resolution: 2966x900
DE: Cinnamon 3.4.4
WM: Muffin
WM Theme: Tyr jord (Windows 10)
GTK Theme: Windows 10 [GTK2/3]
Icon Theme: Adwaita
Font: Serif 9
CPU: Intel Pentium B960 @ 2x 2.2GHz [59.0°C]
GPU: intel
RAM: 2892MiB / 7805MiB


          Instruction Modes in the Execution Process        


Instruction Execution Mechanism
The main function of a computer is to execute programs. Under the stored-program concept, the program being executed (a collection of instructions) resides in memory. The processor does its work by executing the instructions of that program.
Instruction processing consists of two stages:
1. The processor reads an instruction from memory (fetch)
2. The processor executes the instruction (execute)

Instruction Execution Modes

The processor has various execution modes, usually associated with privilege levels:
· programs that are part of the operating system
· user programs

Certain instructions can only be executed in the highly privileged mode. Instructions that require high privilege include, for example:
· reading or modifying control registers (the bits of the PSW register)
· primitive input/output instructions
· memory-management instructions
· access to certain regions of memory, which is only possible in the highly privileged mode
User Mode and System Mode
The low-privilege mode is called user mode, because ordinary user (application) programs execute in it.
The high-privilege mode is called:
- system mode, or
- control mode, or
- supervisor mode, or
- kernel mode.

System, control, or kernel routines usually execute in this mode.
The reason for having two modes is security. Operating-system tables, such as the process table (PCB), must be protected from interference by user programs. The process table can only be modified in system mode. A user program running in user mode cannot change the process table, and therefore cannot corrupt the system.

In kernel mode, the software has full control over the processor, instructions, registers and memory. This level of control is not available to user programs, so the operating system cannot be interfered with by them; this prevents chaos.
The processor knows its current execution mode from the PSW: there is a bit in the PSW that indicates the execution mode.

When a user program requests a service from the operating system by issuing a system call, the system call causes a trap. The system switches the execution mode to kernel mode. In kernel mode, the operating system carries out what the user program requested. As soon as it is done, the operating system switches the mode back to user mode and returns control to the user program.

With the two modes and the trap technique we gain two benefits:
1. User programs are prevented from corrupting the operating system's tables
2. User programs are prevented from corrupting the operating system's control mechanisms.
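To watch this trap mechanism at work, a small sketch (assuming the strace utility is installed): tracing any ordinary command lists every system call, i.e. every point where the program crossed from user mode into kernel mode:

strace -c ls > /dev/null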

Source:
http://seorangteknikinformatika.blogspot.com/2010/12/sistem-komputer.html

          Fifty Shades of Grey Massage Me Massage Candle, 192g        
Light the FSoG Massage Me candle from the Fifty Shades of Grey collection and fill your room with an erotic fragrance that sets the scene for sensual pleasures. Enriched with coconut and jojoba oil, the massage candle can be used as a scented candle or as a massage oil. The moisturizing, nourishing melted wax gently cares for the skin and envelops it in fragrance. Immerse yourself in the sensual pleasure of an erotic massage: with soft, warm wax and gentle, playful strokes, experience the indescribable pleasure of intimacy. Ingredients: Hydrogenated Vegetable Oil, Glycine Soja (Soybean) Oil, Parfum, Benzyl Benzoate, Cannabis Sativa (Hemp) Seed Oil, Persea Gratissima (Avocado) Oil, Prunus Armeniaca (Apricot) Kernel Oil, Geraniol, Citronellol, Citral (Neral + Geranial), Evernia Prunastri (Oakmoss), Coumarin, Linalool, Alpha Iso Methyl Ionone, Limonene, Lily Aldehyde (Butylphenyl Methylpropional), Tocopherol, Cocos Nucifera (Coconut) Oil, Simmondsia Chinensis (Jojoba) Seed Oil. Manufacturer: LoveHoney. For: couples. Color: grey. Weight: 192 g
          Comment on Gigabyte Z170 HD3 with Linux Debian 8.2 Jessie by vi3ual        
Hello, numberformat :) First of all, thanks for the article! I just ordered a similar configuration for a new server, but your freezing report worries me a little. Do you think it has to do with audio? Did you have any issues with RAID or the network card? Was any dmraid or mdadm kernel boot parameter required? I'm going to do a remote installation, so I need to be prepared, and I'd really appreciate your clarifying this. Thanks, Paul
          Comment on Apple Magic Mouse Works with Ubuntu! by Horacio DM        
hi! I'm now using Ubuntu 11.10, kernel 3.0.0.14. Yesterday I installed my Apple Magic Mouse: I just switched the mouse on and my laptop found it right away! Today it doesn't work!! I uninstalled it, shut down the machine, then switched it on again, but nothing happens... In Windows 7 I only have to click the mouse 3 times and it works... What else can I do? Can you help me? (excuse my English, I'm from Venezuela) Thanks a lot..
          Comment on Ubuntu Printer installation of Canon MF4150 by Ben        
I am new to Ubuntu and have tried this method, but when I try to print a test page it gives me a print error: Print Error. There was a problem printing document 'Test Page' (job 1); 'Stopping job because the scheduler could not execute a filter.'. The options are 'Diagnose', which does not achieve any useful results, and 'Cancel'. I am running Ubuntu 11.10 (Oneiric), kernel Linux 3.0.0-13-generic, GNOME 3.2.1 on a T400s ThinkPad. Could anyone give me some help? Thanks in advance!
          Teach you how to identify genuine 4G phone?        


If you're shopping for a 4G phone on eBay or another online platform, pay attention! If you're not careful, you may end up buying a fake phone, wasting your money and exposing yourself to a potential security threat!

So what specific harm can fake 4G phones cause?

The main points:

1: Fake 4G phones are self-assembled by shoddy manufacturers; they are unlicensed products, so it is difficult to get any quality assurance.

2: Fake 4G phone batteries use inferior parts and carry a higher risk of explosion, which is a serious safety hazard.

3: Fake 4G phones' electromagnetic radiation does not comply with regulations and will harm the user's health.

So how do you tell a genuine 4G phone from a fake one? Two simple tricks:

1: Strip it down. Pull off the phone's back cover and take out the battery to see the factory label inside. A fake 4G phone's factory label reads "GSM digital mobile telephone", while a genuine 4G phone's label reads "TD-SCDMA LTE digital mobile telephone"; GSM is a 2G network.

2: Use external software. Many mobile security apps can read the phone's kernel information; with such third-party software you can also verify authenticity. The criterion is still whether the phone supports TD-LTE networks.

OK. If you want to buy a 4G phone online and make sure your purchase is genuine, just follow these two methods!


          Stack grouping can bypass the Linux kernel's protection mechanisms        
Secuobs.com: 13/04/2011 - Ludovic Blin - secuobs: Stack grouping can bypass the Linux kernel's protection mechanisms
          Parachute Play        
Read-Aloud Books
Boing by Nick Bruel
Bounce by Doreen Cronin
Bubble Gum, Bubble Gum by Lisa Wheeler
Emily Loves to Bounce by Stephen Michael
Way Down Deep in the Deep Blue Sea by Jan Peck

Songs
Glad to See You by Peter and Ellen Allard
Icky Sticky Bubble Gum by David Belafonte
Jump in the Line by Harry Belafonte

Items to Bounce On Parachute
Colored puffballs ("Bubble Gum")  
White puffballs or crumbled balls of white paper ("Popcorn")
Five monkey puppets 
Five goldfish puppets

Bubble Gum Chants
Bubble gum, bubble gum, chew and blow,
Bubble gum, bubble gum, scrape your toe,
Bubble gum, bubble gum, tastes so sweet,
Get that bubble gum off your feet!

Bubblegum, bubblegum, in a dish.
How many pieces do you wish?
Bubblegum, bubblegum, tastes so sweet.
How many pieces can you eat? (1, 2, 3...)

Popcorn Chants
One little kernel, sleeping in the pot
Turn on the heat, and watch it pop!
(Add more "Popcorn" to bounce on parachute: Two little kernels sleeping in the pot…)

You put the oil in the pot and you let it get hot
You put the popcorn in and you start to grin.
Sizzle, sizzle
Sizzle, sizzle
Sizzle, sizzle
Sizzle, sizzle
Sizzle, sizzle
Sizzle, sizzle
POP!
(Children begin in a crouched position, knees bent, then s-l-o-w-l-y rise until the final POP! when we jump up in the air!)

Five little popcorns sitting in a pan
One got hot and it went…. BAM!
(Catch bouncing popcorn, one by one, and remove from parachute)

Monkey Activities
One little monkey swinging in the tree,
Where's another monkey to play with me?
(Add another monkey to bounce on parachute,"Two little monkeys swinging in the tree...")

Five little monkeys jumping on the bed
One fell off and bumped his head
Mama called the doctor, and the doctor said,
"No more monkeys jumping on the bed."
(Catch bouncing monkeys, one by one, and remove from parachute) 

Goldfish Activities
One little goldfish swimming in the sea,
"Where's another goldfish to swim with me?"
(Add another goldfish to bounce on parachute:"Two little goldfish, swimming in the sea...") 

Five little goldfish swimming in the sea
Teasing Mr. Shark, "You can't catch me!"
Along comes Mr. Shark, quiet as can be and -
SNAPS that goldfish out of the sea!
(Catch bouncing goldfish, one by one, and remove from parachute)

When the Parachute Goes Up
When the parachute goes up, stomp your feet
When the parachute goes up, stomp your feet
When the parachute goes high, and lifts up toward the sky
When the parachute goes up, stomp your feet
(Repeat with: bend your knees, shake your head, shout hooray!)
          Comment on MySQL is gone. Here comes MariaDB and Drizzle. by Quique        
It's worth remembering that MariaDB is not the only fork of MySQL. Drizzle (http://www.drizzle.org/) derives from the MySQL 6.0 codebase, but with a redesign of the architecture, which is microkernel-style. Drizzle is a 100% free software project, led by its community (not by a person who already sold MySQL). It participates actively in GSoC. For more information, visit its website or Wikipedia.
          Linux is easy        

About as easy as removing a kidney stone with a dessert spoon. Using your feet.

Yes, I'm a bit annoyed.

Since upgrading Ubuntu from "Jaunty" to "Karmic" I had several "little problems". The first: the USB WiFi dongle didn't work, so I had no Internet. The second: sound didn't work. Nothing important, as you can see. The usual for an upgrade between (supposedly) stable releases. And then people complain about Debian.

I solved the WiFi problem by switching dongles (luckily I had two different ones). I fixed the sound problem by tweaking the pulseaudio configuration after a furious search through Google and Launchpad. pulseaudio is like a suppository: you know it will do you good in the long run, but nobody can spare you the bad time while it is administered. Right now it causes problems for a lot of people, but it is a superior architecture that allows much more than the previous sound servers did. Every distribution has adopted it, and despite appearances, in general it works well.

But as I said: a suppository.

The change I had to make to the pulseaudio configuration was minimal: uncomment one line and that's it. But for someone who has just installed Ubuntu it wouldn't be that easy. They would have to wade through Ubuntu's bugs, know what ALSA is, know what pulseaudio is... It would not be easy. At all.

And today's episode was the icing on the cake. I noticed the line input wasn't working. That's where I plug in the output of my guitar multi-effects unit, so no line input means no guitar sound. I thought it would be a matter of playing around a bit with alsamixer, as on other occasions after upgrading the kernel. But no: even though the volume levels were fine, and the pulseaudio monitor showed there was sound on the line input, nothing coming in through it could be heard.

The first Google search led me to try a model parameter for the snd-hda-intel module, according to this list. I had already tried that a while ago when I had other problems with the sound card, and although it fixed things back then, this time there was no luck. In fact, the problem I had back then was the same as now, and a couple more searches reminded me of what I'd had to do: enable the card's analog loopback, one of the "switches" in alsamixer.

Which was nowhere to be found.

I kept searching and saw that the same thing had happened to other people: the analog loopback control had disappeared. That bug pointed to an entry in the ALSA changelog saying that, since it caused so many problems, they had removed it by default. It could now be enabled with the hint "loopback = yes".

Of course, I had no idea what it meant by "hint", or how to apply it.

The bug report didn't explain it. It's always appreciated when they give you part of the solution to your problem so you can test yourself by finding the rest. It toughens you up. Builds character. Separates the men from the boys. If I had been trying out Ubuntu, at this point I would have thrown the mouse out of the window in a fit of frustration and installed Windows.

Half an hour and several more Google searches later, I found a document called "More notes on HD-Audio driver" that lives in the ALSA sources. This little file, 16 pages of delightful technical documentation in plain text, is what any average user could read and understand in a moment to troubleshoot their sound problems. If "average user" implies several years of Linux experience, that is.

In the "HD-Audio Reconfiguration" section it explains that the snd-hda-intel module can be reconfigured on the fly using the files under /sys. At this point I have to include the original text, to be savored in all its glory:

The following sysfs
files are available under each codec-hwdep device directory (e.g. 
/sys/class/sound/hwC0D0):

vendor_id::
  Shows the 32bit codec vendor-id hex number.  You can change the
  vendor-id value by writing to this file.
subsystem_id::
  Shows the 32bit codec subsystem-id hex number.  You can change the
  subsystem-id value by writing to this file.
revision_id::
  Shows the 32bit codec revision-id hex number.  You can change the
  revision-id value by writing to this file.
afg::
  Shows the AFG ID.  This is read-only.
mfg::
  Shows the MFG ID.  This is read-only.
name::
  Shows the codec name string.  Can be changed by writing to this
  file.
modelname::
  Shows the currently set `model` option.  Can be changed by writing
  to this file.
init_verbs::
  The extra verbs to execute at initialization.  You can add a verb by
  writing to this file.  Pass three numbers: nid, verb and parameter
  (separated with a space).
hints::
  Shows / stores hint strings for codec parsers for any use.
  Its format is `key = value`.  For example, passing `hp_detect = yes`
  to IDT/STAC codec parser will result in the disablement of the
  headphone detection.
init_pin_configs::
  Shows the initial pin default config values set by BIOS.
driver_pin_configs::
  Shows the pin default values set by the codec parser explicitly.
  This doesn't show all pin values but only the changed values by
  the parser.  That is, if the parser doesn't change the pin default
  config values by itself, this will contain nothing.
user_pin_configs::
  Shows the pin default config values to override the BIOS setup.
  Writing this (with two numbers, NID and value) appends the new
  value.  The given will be used instead of the initial BIOS value at
  the next reconfiguration time.  Note that this config will override
  even the driver pin configs, too.
reconfig::
  Triggers the codec re-configuration.  When any value is written to
  this file, the driver re-initialize and parses the codec tree
  again.  All the changes done by the sysfs entries above are taken
  into account.
clear::
  Resets the codec, removes the mixer elements and PCM stuff of the
  specified codec, and clear all init verbs and hints.

But there it was: the hints file, which was what the changelog was referring to. I figured that what I had to do was add loopback = yes to that file, like so:

echo "loopback = yes" > /sys/class/sound/hwC0D0/hints

And then reconfigure the card:

echo 1 > /sys/class/sound/hwC0D0/reconfig

With that, the analog loopback control appeared in alsamixer, and sound from the line input worked again.
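(A note for anyone reproducing this: writes under /sys do not survive a reboot, so to make the fix permanent the two commands have to be reapplied at boot, for example from a startup script; a minimal sketch assuming the same codec device path:

echo "loopback = yes" > /sys/class/sound/hwC0D0/hints
echo 1 > /sys/class/sound/hwC0D0/reconfig)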

Gosh, how easy it was!

In the end you realize that it's not that Linux is hard: it's just that you haven't stopped to search for an hour and a half through bugs, forum posts and technical documentation. Anyone could do it.

I don't think this will be the year of Linux on the desktop either.

PS: and on top of that, a bot has filled several entries of this weblog with spam. You can't even trust the captcha anymore.


          The Swedes take on Sundance in the USA        
The Sundance Film Festival in the USA has kicked off. We talk to the participating Swedish directors Amanda Kernell and Tarik Saleh. In addition, the culture desk's Lisa Bergström on the significance of Sundance.
          Convexified Convolutional Neural Networks - implementation -        

Convexified Convolutional Neural Networks by Yuchen Zhang, Percy Liang, Martin J. Wainwright

We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented as a low-rank matrix, which can be relaxed to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layer-wise manner. Empirically, CCNNs achieve performance competitive with CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.
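
For intuition on the relaxation step (standard low-rank arithmetic, my gloss rather than the paper's wording): if the filter parameters are stacked into a matrix A, the exact CNN class corresponds to a non-convex constraint of the form rank(A) <= r, and the usual convex surrogate replaces it with a nuclear-norm ball ||A||_* <= B, over which the learning problem becomes convex.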

The ICML version is here. The code for the paper is at: https://github.com/zhangyuc/CCNN
The worksheet for this paper is on CodaLab here.


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

          Papers: ICML2017 workshop on Implicit Models        
The organizers (David Blei, Ian Goodfellow, Balaji Lakshminarayanan, Shakir Mohamed, Rajesh Ranganath, Dustin Tran ) made the papers of the ICML2017 workshop on Implicit Models available here:

  1. A-NICE-MC: Adversarial Training for MCMC Jiaming Song, Shengjia Zhao, Stefano Ermon
  2. ABC-GAN: Adaptive Blur and Control for improved training stability of Generative Adversarial Networks Igor Susmelj, Eirikur Agustsson, Radu Timofte
  3. Adversarial Inversion for Amortized Inference Zenna Tavares, Armando Solar Lezama
  4. Adversarial Variational Inference for Tweedie Compound Poisson Models Yaodong Yang, Sergey Demyanov, Yuanyuan Liu, Jun Wang
  5. Adversarially Learned Boundaries in Instance Segmentation Amy Zhang
  6. Approximate Inference with Amortised MCMC Yingzhen Li, Richard E. Turner, Qiang Liu
  7. Can GAN Learn Topological Features of a Graph? Weiyi Liu, Pin-Yu Chen, Hal Cooper, Min Hwan Oh, Sailung Yeung, Toyotaro Suzumura
  8. Conditional generation of multi-modal data using constrained embedding space mapping Subhajit Chaudhury, Sakyasingha Dasgupta, Asim Munawar, Md. A. Salam Khan and Ryuki Tachibana
  9. Deep Hybrid Discriminative-Generative Models for Semi-Supervised Learning Volodymyr Kuleshov, Stefano Ermon
  10. ELFI, a software package for likelihood-free inference Jarno Lintusaari, Henri Vuollekoski, Antti Kangasrääsiö, Kusti Skyten, Marko Järvenpää, Michael Gutmann, Aki Vehtari, Jukka Corander, Samuel Kaski
  11. Flow-GAN: Bridging implicit and prescribed learning in generative models Aditya Grover, Manik Dhar, Stefano Ermon
  12. GANs Powered by Autoencoding — A Theoretic Reasoning Zhifei Zhang, Yang Song, and Hairong Qi
  13. Geometric GAN Jae Hyun Lim and Jong Chul Ye
  14. Gradient Estimators for Implicit Models Yingzhen Li, Richard E. Turner
  15. Implicit Manifold Learning on Generative Adversarial Networks Kry Yik Chau Lui, Yanshuai Cao, Maxime Gazeau, Kelvin Shuangjian Zhang
  16. Implicit Variational Inference with Kernel Density Ratio Fitting Jiaxin Shi, Shengyang Sun, Jun Zhu
  17. Improved Network Robustness with Adversarial Critic Alexander Matyasko, Lap-Pui Chau
  18. Improved Training of Wasserstein GANs Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron Courville
  19. Inference in differentiable generative models Matthew M. Graham and Amos J. Storkey
  20. Joint Training in Generative Adversarial Networks R Devon Hjelm, Athul Paul Jacob, Yoshua Bengio
  21. Latent Space GANs for 3D Point Clouds Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, Leonidas Guibas
  22. Likelihood Estimation for Generative Adversarial Networks Hamid Eghbal-zadeh, Gerhard Widmer
  23. Maximizing Independence with GANs for Non-linear ICA Philemon Brakel, Yoshua Bengio 
  24. Non linear Mixed Effects Models: Bridging the gap between Independent Metropolis Hastings and Variational Inference Belhal Karimi
  25. Practical Adversarial Training with Empirical Distribution Ambrish Rawat, Mathieu Sinn, Maria-Irina Nicolae
  26. Recursive Cross-Domain Facial Composite and Generation from Limited Facial Parts Yang Song, Zhifei Zhang, Hairong Qi
  27. Resampled Proposal Distributions for Variational Inference and Learning Aditya Grover, Ramki Gummadi, Miguel Lazaro-Gredil, Dale Schuurmans, Stefano Ermon
  28. Rigorous Analysis of Adversarial Training with Empirical Distributions Mathieu Sinn, Ambrish Rawat, Maria-Irina Nicolae
  29. Robust Controllable Embedding of High-Dimensional Observations of Markov Decision Processes Ershad Banijamali, Rui Shu, Mohammad Ghavamzadeh, Hung Bui
  30. Spectral Normalization for Generative Adversarial Network Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida
  31. Stabilizing the Conditional Adversarial Network by Decoupled Learning Zhifei Zhang, Yang Song, and Hairong Qi
  32. Stabilizing Training of Generative Adversarial Networks through Regularization Kevin Roth, Aurelien Lucchi, Sebastian Nowozin & Thomas Hofmann
  33. Stochastic Reconstruction of Three-Dimensional Porous Media using Generative Adversarial Networks Lukas Mosser, Olivier Dubrule, Martin J. Blunt
  34. The Amortized Bootstrap Eric Nalisnick, Padhraic Smyth 
  35. The Numerics of GANs Lars Mescheder, Sebastian Nowozin, Andreas Geiger
  36. Towards the Use of Gaussian Graphical Models in Variational Autoencoders Alexandra Pește, Luigi Malagò
  37. Training GANs with Variational Statistical Information Minimization Michael Ghaben
  38. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks Jun-Yan Zhu*, Taesung Park*, Phillip Isola, Alexei A. Efros
  39. Unsupervised Domain Adaptation Using Approximate Label Matching Jordan T. Ash, Robert E. Schapire, Barbara E. Englhardt
  40. Variance Regularizing Adversarial Learning Karan Grewal, R Devon Hjelm, Yoshua Bengio
  41. Variational Representation Autoencoders to Reduce Mode Collapse in GANs Akash Srivastava, Lazar Valkov, Chris Russell, Michael U. Gutmann, Charles Sutton


Dougal Sutherland, Evaluating and Training Implicit Generative Models with Two-Sample Tests
Samples from implicit generative models are difficult to judge quantitatively: particularly for images, it is typically easy for humans to identify certain kinds of samples which are very unlikely under the reference distribution, but very difficult for humans to identify when modes are missing, or when types are merely under- or over-represented. This talk will overview different approaches towards evaluating the output of an implicit generative model, with a focus on identifying ways in which the model has failed. Some of these approaches also form the basis for the objective functions of GAN variants which can help avoid some of the issues of stability and mode-dropping in the original GAN.
Kerrie Mengerson, Probabilistic Modelling in the Real World
Interest is intensifying in the development and application of Bayesian approaches to estimation of real-world processes using probabilistic models. This presentation will focus on three substantive case studies in which we have been involved: protecting the Great Barrier Reef in Australia from impacts such as crown of thorns starfish and industrial dredging, reducing congestion at international airports, and predicting survival of jaguars in the Peruvian Amazon. Through these examples, we will explore current ideas about Approximate Bayesian Computation, Populations of Models, Bayesian priors and p-values, and Bayesian dynamic networks.

Sanjeev Arora, Do GANs actually learn the distribution? Some theory and empirics
The Generative Adversarial Nets or GANs framework (Goodfellow et al'14) for learning distributions differs from older ideas such as autoencoders and deep Boltzmann machines in that it scores the generated distribution using a discriminator net, instead of a perplexity-like calculation. It appears to work well in practice, e.g., the generated images look better than older techniques. But how well do these nets learn the target distribution?
Our paper 1 (ICML'17) shows GAN training may not have good generalization properties; e.g., training may appear successful but the trained distribution may be far from target distribution in standard metrics. We show theoretically that this can happen even though the 2-person game between discriminator and generator is in near-equilibrium, where the generator appears to have "won" (with respect to natural training objectives).
Paper2 (arxiv June 26) empirically tests whether this lack of generalization occurs in real-life training. The paper introduces a new quantitative test for diversity of a distribution based upon the famous birthday paradox. This test reveals that distributions learnt by some leading GANs techniques have fairly small support (i.e., suffer from mode collapse), which implies that they are far from the target distribution.
Paper 1: "Equilibrium and Generalization in GANs" by Arora, Ge, Liang, Ma, Zhang. (ICML 2017)
Paper 2: "Do GANs actually learn the distribution? An empirical study." by Arora and Zhang (https://arxiv.org/abs/1706.08224)
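
For the quantitative side of the birthday-paradox test (my gloss, standard arithmetic rather than a quote from the papers): a batch of N samples from a distribution with near-uniform support s contains a duplicate with probability about 1 - exp(-N^2/(2s)), so seeing duplicates in roughly half of such batches points at a support of about s = N^2/(2 ln 2), i.e. on the order of N^2.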

Stefano Ermon, Generative Adversarial Imitation Learning
Consider learning a policy from example expert behavior, without interaction with the expert or access to a reward or cost signal. One approach is to recover the expert’s cost function with inverse reinforcement learning, then compute an optimal policy for that cost function. This approach is indirect and can be slow. In this talk, I will discuss a new generative modeling framework for directly extracting a policy from data, drawing an analogy between imitation learning and generative adversarial networks. I will derive a model-free imitation learning algorithm that obtains significant performance gains over existing methods in imitating complex behaviors in large, high-dimensional environments. Our approach can also be used to infer the latent structure of human demonstrations in an unsupervised way. As an example, I will show a driving application where a model learned from demonstrations is able to both produce different driving styles and accurately anticipate human actions using raw visual inputs.
Qiang Liu, Wild Variational Inference with Expressive Variational Families
Variational inference (VI) provides a powerful tool for reasoning with highly complex probabilistic models in machine learning. The basic idea of VI is to approximate complex target distributions with simpler distributions found by minimizing the KL divergence within some predefined parametric families. A key limitation of the typical VI techniques, however, is that they require the variational family to be simple enough to have tractable likelihood functions, which excludes a broad range of flexible, expressive families such as these defined via implicit models. In this talk, we will discuss a general framework for (wild) variational inference that works for much more expressive, implicitly defined variational families with intractable likelihood functions. Our key idea is to first lift the optimization problem into the infinite dimensional space, solved using nonparametric particle methods, and then project the update back to the finite dimensional parameter space that we want to optimize with. Our framework is highly general and allows us to leverage any existing particle methods as the inference engine for wild variational inference, including MCMC and Stein variational gradient methods.




          SGD, What Is It Good For ?        
Image Credit: NASA/JPL-Caltech/Space Science Institute, 
N00284488.jpg, Titan, Jul. 11, 2017 10:12 AM

As the recent activity on the subject shows, there is growing interest in understanding SGD and related methods better; we mentioned two such studies recently on Nuit Blanche.

Sebastian Ruder updated his blog entry on the subject, An overview of gradient descent optimization algorithms (adding derivations of AdaMax and Nadam). In Reinforcement Learning or Evolutionary Strategies? Nature has a solution: Both, Arthur Juliani mentions an insight on gradient-based methods in RL (h/t Tarin for the pointer on Twitter):

It is clear that for many reactive policies, or situations with extremely sparse rewards, ES is a strong candidate, especially if you have access to the computational resources that allow for massively parallel training. On the other hand, gradient-based methods using RL or supervision are going to be useful when a rich feedback signal is available, and we need to learn quickly with less data.

But we also had people trying to speed SGD up, while others cast some doubt on the whole adaptive approach. And we have one paper where SGD, helped by random features, solves the linear Bellman equation, a tool central to linear control theory.
Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves ~90% scaling efficiency when moving from 8 to 256 GPUs. This system enables us to train visual recognition models on internet-scale data with high efficiency. 
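
In plain terms, the linear scaling rule says: when you multiply the minibatch size by k, multiply the learning rate by k too. With the customary ImageNet defaults (quoted here for illustration, not from the abstract), a base rate of 0.1 for a minibatch of 256 images becomes 0.1 * k for a minibatch of k * 256 images, and the warmup scheme ramps the rate linearly from a small value to that target over the first few epochs instead of starting at full strength.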


Adaptive optimization methods, which perform local optimization with a metric constructed from the history of iterates, are becoming increasingly popular for training deep neural networks. Examples include AdaGrad, RMSProp, and Adam. We show that for simple overparameterized problems, adaptive methods often find drastically different solutions than gradient descent (GD) or stochastic gradient descent (SGD). We construct an illustrative binary classification problem where the data is linearly separable, GD and SGD achieve zero test error, and AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to half. We additionally study the empirical generalization capability of adaptive methods on several state-of-the-art deep learning models. We observe that the solutions found by adaptive methods generalize worse (often significantly worse) than SGD, even when these solutions have better training performance. These results suggest that practitioners should reconsider the use of adaptive methods to train neural networks.

We introduce a data-efficient approach for solving the linear Bellman equation, which corresponds to a class of Markov decision processes (MDPs) and stochastic optimal control (SOC) problems. We show that this class of control problem can be cast as a stochastic composition optimization problem, which can be further reformulated as a saddle point problem and solved via dual kernel embeddings [1]. Our method is model-free and using only one sample per state transition from stochastic dynamical systems. Different from related work such as Z-learning [2, 3] based on temporal-difference learning [4], our method is an online algorithm following the true stochastic gradient. Numerical results are provided, showing that our method outperforms the Z-learning algorithm


Gradient descent optimization algorithms, while increasingly popular, are often used as black-box optimizers, as practical explanations of their strengths and weaknesses are hard to come by. This article aims to provide the reader with intuitions with regard to the behaviour of different algorithms that will allow her to put them to use. In the course of this overview, we look at different variants of gradient descent, summarize challenges, introduce the most common optimization algorithms, review architectures in a parallel and distributed setting, and investigate additional strategies for optimizing gradient descent.



          A little help with ./configure and make, make install, in Help and technical support: Installation, uninstallation, update, upgrade ...        
Topic: a little help with ./configure and make, make install Message: It is a convention for programs and/or the kernel to use make and make install; every program has a README file that explains how to compile or install it. An uninstall target may also exist, but it is not mandatory - only if the program's author wrote the instructions for it. Usually uninstalling is done by deleting the files copied by make install; for the kernel, for example, you delete the files in /lib/modules/4.6.2 and those in /boot that have 4.6.2 in their name (assuming you compiled kernel 4.6.2). Of course, if you Debianize the program you can install/uninstall it like any program with the .deb extension.
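
For reference, the convention described above is the classic three-step dance, run from the unpacked source directory (the install step usually requires root):

./configure
make
make install

Since make install merely copies files into place, the manual "uninstall" described above amounts to deleting those same files, which is why Debianizing the program and managing it as a .deb is the more comfortable route.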
          ì˜ë£Œê¸°ê¸° Application 개발 (Android Application 개발) / 시스템 소프트웨어 개발 (Android kernel device driver 개발)        
...
          Porting NetBSD to Allwinner H3 SoCs        

A new SUNXI evbarm kernel has appeared recently in NetBSD -current with support for boards based on the Allwinner H3 system on a chip (SoC). The H3 SoC is a quad-core Cortex-A7 SoC designed primarily for set-top boxes, but has managed to find its way into many single-board computers (SBC). This is one of the first evbarm ports built from the ground up with device tree support, which helps us to use a single kernel config to support many different boards.

To get these boards up and running, first we need to deal with low-level startup code. For the SUNXI kernel this currently lives in sys/arch/evbarm/sunxi/. The purpose of this code is fairly simple: initialize the boot CPU and the MMU so we can jump to the kernel. The initial MMU configuration needs to cover a few things -- early on we need to be able to access the kernel, the UART debug console, and the device tree blob (DTB) passed in from U-Boot. We wrap the kernel in a U-Boot header that claims to be a Linux kernel; this is no accident! It tells U-Boot to use the Linux boot protocol when loading the kernel, which ensures that the DTB (loaded by U-Boot) is processed and passed to us in r2.
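
For example, the netbsd.ub image loaded in the boot transcript below could plausibly be produced with a standard mkimage invocation along these lines (the file names are illustrative):

mkimage -A arm -O linux -T kernel -C none -a 0x40008000 -e 0x40008000 \
        -n "NetBSD/sunxi 8.99.1" -d netbsd.bin netbsd.ub

The -O linux part is exactly the "claims to be a Linux kernel" trick: it makes U-Boot follow the Linux boot protocol and hand us the processed DTB.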

Once the CPU and MMU are ready, we jump to the generic ARM FDT implementation of initarm in sys/arch/evbarm/fdt/fdt_machdep.c. The first thing this code does is validate and relocate the DTB data. After it has been relocated, we compare the compatible property of the root node in the device tree with the list of ARM platforms compiled into the kernel. The Allwinner sunxi platform code lives in sys/arch/arm/sunxi/sunxi_platform.c. The sunxi platform code provides SoC-specific versions of code needed early at boot. We need to know how to initialize the debug console, spin up application CPUs, reset the board, etc.
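
Reduced to a standalone sketch (the names and table layout are invented for illustration, not the actual sys/arch/arm code), the matching step looks like this:

// Walk the compiled-in platform table and pick the entry whose
// "compatible" string matches the DTB's root node.
#include <cstdio>
#include <cstring>

struct arm_platform {
	const char *compat;          // DT "compatible" value
	void (*init_console)(void);  // SoC-specific early hook
};

static void sunxi_console(void) { std::puts("sunxi early console up"); }

static const arm_platform platforms[] = {
	{ "allwinner,sun8i-h3", sunxi_console },
};

static const arm_platform *match_platform(const char *root_compat) {
	for (const auto &p : platforms)
		if (std::strcmp(p.compat, root_compat) == 0)
			return &p;
	return nullptr;
}

int main(void) {
	if (const arm_platform *p = match_platform("allwinner,sun8i-h3"))
		p->init_console();
	return 0;
}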

Instead of writing H3-specific code for spinning up application CPUs, I took advantage of U-Boot's Power State Coordination Interface implementation. A psci(4) driver was added and the allwinner,sun8i-h3 platform code was modified to use this code to start up all processors.

With a bit of luck, we're now booting and enumerating devices. Apart from a few devices, almost nothing works yet as we are missing a driver for the CCU. The CCU in the Allwinner H3 SoC controls PLLs and most of the clock generation, division, muxing, and gating. Since there are many similarities between Allwinner SoCs, I opted to write generic CCU code and then SoC-specific frontends. The resulting code lives in sys/arch/arm/sunxi/; generic code as sunxi_ccu.c and H3-specific code in sun8i_h3_ccu.c.
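
The shape of that split can be pictured with a standalone sketch (names and register offsets invented for illustration): the generic half only knows how to act on a table of clock descriptions, and each SoC frontend is little more than its own table.

#include <cstdio>

// Generic half: one description per clock, interpreted by shared code.
struct sunxi_ccu_clk {
	const char *name;
	unsigned    gate_reg;   // register offset holding the gate bit
	unsigned    gate_bit;
};

static void ccu_enable(const sunxi_ccu_clk &clk) {
	// A real driver would set the bit in the CCU register block;
	// here we only report which bit would be touched.
	std::printf("enable %s: reg 0x%03x bit %u\n",
	            clk.name, clk.gate_reg, clk.gate_bit);
}

// H3-specific frontend: nothing but data.
static const sunxi_ccu_clk sun8i_h3_clks[] = {
	{ "bus-mmc0", 0x060, 8 },
	{ "bus-emac", 0x060, 17 },
};

int main(void) {
	for (const auto &clk : sun8i_h3_clks)
		ccu_enable(clk);
	return 0;
}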

Now we have a CCU driver, we can attach a com(4) and have a valid console device.

After this, it's a matter of writing drivers and/or adapting existing code to attach to fdtbus based on the bindings used in the DTB. For cases where we had a compatible driver in the old Allwinner port, I opted to make a copy of the code and FDT-ize it. A few reasons for this -- 1) the old drivers have CCU-specific code with per-SoC ifdefs scattered throughout, 2) I didn't want to break existing kernels, and 3) the long-term goal is to move the SoCs supported by the old code over to the new code (this process has already started with the Allwinner A31 port).

So what do we get out of this? This is a step towards being able to ship a GENERIC evbarm kernel. I developed the H3 port on two boards, the NanoPi NEO and Orange Pi Plus 2E, but since then users on port-arm@ have been reporting success on many other H3 boards, all from a single kernel config. In addition, I've added support for other Allwinner SoCs (sun8i-a83t, sun6i-a31) to the kernel and have tested booting the same kernel across all 3 SoCs.

Orange Pi Plus 2E boot log is below.

U-Boot SPL 2017.05 (Jul 01 2017 - 17:11:09)
DRAM: 2048 MiB
Trying to boot from MMC1


U-Boot 2017.05 (Jul 01 2017 - 17:11:09 -0300) Allwinner Technology

CPU:   Allwinner H3 (SUN8I 1680)
Model: Xunlong Orange Pi Plus 2E
I2C:   ready
DRAM:  2 GiB
MMC:   SUNXI SD/MMC: 0, SUNXI SD/MMC: 1
In:    serial
Out:   serial
Err:   serial
Net:   phy interface7
eth0: ethernet@1c30000
starting USB...
USB0:   USB EHCI 1.00
USB1:   USB OHCI 1.0
USB2:   USB EHCI 1.00
USB3:   USB OHCI 1.0
USB4:   USB EHCI 1.00
USB5:   USB OHCI 1.0
scanning bus 0 for devices... 2 USB Device(s) found
scanning bus 2 for devices... 1 USB Device(s) found
scanning bus 4 for devices... 1 USB Device(s) found
       scanning usb for storage devices... 0 Storage Device(s) found
Hit any key to stop autoboot:  0
reading netbsd.ub
6600212 bytes read in 334 ms (18.8 MiB/s)
reading sun8i-h3-orangepi-plus2e.dtb
16775 bytes read in 49 ms (334 KiB/s)
## Booting kernel from Legacy Image at 42000000 ...
   Image Name:   NetBSD/sunxi 8.99.1
   Image Type:   ARM Linux Kernel Image (uncompressed)
   Data Size:    6600148 Bytes = 6.3 MiB
   Load Address: 40008000
   Entry Point:  40008000
   Verifying Checksum ... OK
## Flattened Device Tree blob at 43000000
   Booting using the fdt blob at 0x43000000
   Loading Kernel Image ... OK
   Loading Device Tree to 49ff8000, end 49fff186 ... OK

Starting kernel ...

[ Kernel symbol table missing! ]
Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
    2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017
    The NetBSD Foundation, Inc.  All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
    The Regents of the University of California.  All rights reserved.

NetBSD 8.99.1 (SUNXI) #304: Sat Jul  8 11:01:22 ADT 2017
        jmcneill@undine.invisible.ca:/usr/home/jmcneill/netbsd/cvs-src/sys/arch/evbarm/compile/obj/SUNXI
total memory = 2048 MB
avail memory = 2020 MB
sysctl_createv: sysctl_create(machine_arch) returned 17
armfdt0 (root)
fdt0 at armfdt0: Xunlong Orange Pi Plus 2E
fdt1 at fdt0
fdt2 at fdt0
cpus0 at fdt0
cpu0 at cpus0: Cortex-A7 r0p5 (Cortex V7A core)
cpu0: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu0: 32KB/32B 2-way L1 VIPT Instruction cache
cpu0: 32KB/64B 4-way write-back-locking-C L1 PIPT Data cache
cpu0: 512KB/64B 8-way write-through L2 PIPT Unified cache
vfp0 at cpu0: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
cpu1 at cpus0
cpu2 at cpus0
cpu3 at cpus0
gic0 at fdt1: GIC
armgic0 at gic0: Generic Interrupt Controller, 160 sources (150 valid)
armgic0: 16 Priorities, 128 SPIs, 7 PPIs, 15 SGIs
fclock0 at fdt2: 24000000 Hz fixed clock
ffclock0 at fdt2: x1 /1 fixed-factor clock
fclock1 at fdt2: 32768 Hz fixed clock
sunxigates0 at fdt2
sunxiresets0 at fdt1
gtmr0 at fdt0: Generic Timer
armgtmr0 at gtmr0: ARMv7 Generic 64-bit Timer (24000 kHz)
armgtmr0: interrupting on irq 27
sunxigpio0 at fdt1: PIO
gpio0 at sunxigpio0: 94 pins
sunxigpio1 at fdt1: PIO
gpio1 at sunxigpio1: 12 pins
sun8ih3ccu0 at fdt1: H3 CCU
fregulator0 at fdt0: vcc3v3
fregulator1 at fdt0: gmac-3v3
fregulator2 at fdt0: vcc3v0
fregulator3 at fdt0: vcc5v0
sunxiusbphy0 at fdt1: USB PHY
/soc/dma-controller@01c02000 at fdt1 not configured
/soc/codec-analog@01f015c0 at fdt1 not configured
/clocks/ir_clk@01f01454 at fdt2 not configured
sunxiemac0 at fdt1: EMAC
sunxiemac0: interrupting on GIC irq 114
rgephy0 at sunxiemac0 phy 0: RTL8169S/8110S/8211 1000BASE-T media interface, rev. 5
rgephy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-FDX, auto
rgephy1 at sunxiemac0 phy 1: RTL8169S/8110S/8211 1000BASE-T media interface, rev. 5
rgephy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT, 1000baseT-FDX, auto
psci0 at fdt0: PSCI 0.1
gpioleds0 at fdt0: orangepi:green:pwr orangepi:red:status
gpiokeys0 at fdt0: sw4
sunximmc0 at fdt1: SD/MMC controller
sunximmc0: interrupting on GIC irq 92
sunximmc1 at fdt1: SD/MMC controller
sunximmc1: interrupting on GIC irq 93
sunximmc2 at fdt1: SD/MMC controller
sunximmc2: interrupting on GIC irq 94
ehci0 at fdt1: EHCI
ehci0: interrupting on GIC irq 106
ehci0: 1 companion controller, 1 port
usb0 at ehci0: USB revision 2.0
ohci0 at fdt1: OHCI
ohci0: interrupting on GIC irq 107
ohci0: OHCI version 1.0
usb1 at ohci0: USB revision 1.0
ehci1 at fdt1: EHCI
ehci1: interrupting on GIC irq 108
ehci1: 1 companion controller, 1 port
usb2 at ehci1: USB revision 2.0
ohci1 at fdt1: OHCI
ohci1: interrupting on GIC irq 109
ohci1: OHCI version 1.0
usb3 at ohci1: USB revision 1.0
ehci2 at fdt1: EHCI
ehci2: interrupting on GIC irq 110
ehci2: 1 companion controller, 1 port
usb4 at ehci2: USB revision 2.0
ohci2 at fdt1: OHCI
ohci2: interrupting on GIC irq 111
ohci2: OHCI version 1.0
usb5 at ohci2: USB revision 1.0
/soc/timer@01c20c00 at fdt1 not configured
/soc/watchdog@01c20ca0 at fdt1 not configured
/soc/codec@01c22c00 at fdt1 not configured
com0 at fdt1: ns16550a, working fifo
com0: console
com0: interrupting on GIC irq 32
sunxirtc0 at fdt1: RTC
/soc/ir@01f02000 at fdt1 not configured
cpu3: Cortex-A7 r0p5 (Cortex V7A core)
cpu3: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu3: 32KB/32B 2-way L1 VIPT Instruction cache
cpu3: 32KB/64B 4-way write-back-locking-C L1 PIPT Data cache
cpu3: 512KB/64B 8-way write-through L2 PIPT Unified cache
vfp3 at cpu3: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
cpu1: Cortex-A7 r0p5 (Cortex V7A core)
cpu1: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu1: 32KB/32B 2-way L1 VIPT Instruction cache
cpu1: 32KB/64B 4-way write-back-locking-C L1 PIPT Data cache
cpu1: 512KB/64B 8-way write-through L2 PIPT Unified cache
vfp1 at cpu1: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
cpu2: Cortex-A7 r0p5 (Cortex V7A core)
cpu2: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu2: 32KB/32B 2-way L1 VIPT Instruction cache
cpu2: 32KB/64B 4-way write-back-locking-C L1 PIPT Data cache
cpu2: 512KB/64B 8-way write-through L2 PIPT Unified cache
vfp2 at cpu2: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
sdmmc0 at sunximmc0
sdmmc1 at sunximmc1
sdmmc2 at sunximmc2
uhub0 at usb0: Generic (0000) EHCI root hub (0000), class 9/0, rev 2.00/1.00, addr 1
uhub1 at usb2: Generic (0000) EHCI root hub (0000), class 9/0, rev 2.00/1.00, addr 1
uhub2 at usb3: Generic (0000) OHCI root hub (0000), class 9/0, rev 1.00/1.00, addr 1
uhub3 at usb1: Generic (0000) OHCI root hub (0000), class 9/0, rev 1.00/1.00, addr 1
uhub4 at usb4: Generic (0000) EHCI root hub (0000), class 9/0, rev 2.00/1.00, addr 1
uhub5 at usb5: Generic (0000) OHCI root hub (0000), class 9/0, rev 1.00/1.00, addr 1
ld2 at sdmmc2: <0x15:0x0100:AWPD3R:0x00:0xec19649f:0x000>
sdmmc0: SD card status: 4-bit, C10, U1, V10
ld0 at sdmmc0: <0x27:0x5048:2&DRP:0x07:0x01c828bc:0x109>
ld2: 14910 MB, 7573 cyl, 64 head, 63 sec, 512 bytes/sect x 30535680 sectors
ld0: 15288 MB, 7765 cyl, 64 head, 63 sec, 512 bytes/sect x 31309824 sectors
(manufacturer 0x24c, product 0xf179, standard function interface code 0x7)at sdmmc1 function 1 not configured
ld2: mbr partition exceeds disk size
ld0: 4-bit width, High-Speed/SDR25, 50.000 MHz
ld2: 8-bit width, 52.000 MHz
urtwn0 at uhub0 port 1
urtwn0: Realtek (0xbda) 802.11n NIC (0x8179), rev 2.00/0.00, addr 2
urtwn0: MAC/BB RTL8188EU, RF 6052 1T1R, address e8:de:27:16:0c:81
urtwn0: 1 rx pipe, 2 tx pipes
urtwn0: 11b rates: 1Mbps 2Mbps 5.5Mbps 11Mbps
urtwn0: 11g rates: 1Mbps 2Mbps 5.5Mbps 11Mbps 6Mbps 9Mbps 12Mbps 18Mbps 24Mbps 36Mbps 48Mbps 54Mbps
boot device: ld0
root on ld0a dumps on ld0b
root file system type: ffs
kern.module.path=/stand/evbarm/8.99.1/modules
WARNING: clock lost 6398 days
WARNING: using filesystem time
WARNING: CHECK AND RESET THE DATE!
Sat Jul  8 11:05:42 ADT 2017
Starting root file system check:
/dev/rld0a: file system is clean; not checking
Not resizing /: already correct size
swapctl: adding /dev/ld0b as swap device at priority 0
Starting file system checks:
/dev/rld0e: 22 files, 32340 free (8085 clusters)
random_seed: /var/db/entropy-file: Not present
Setting tty flags.
Setting sysctl variables:
ddb.onpanic: 1 -> 1
Starting network.
Hostname: sunxi
IPv6 mode: host
Configuring network interfaces:.
Adding interface aliases:.
Waiting for DAD to complete for statically configured addresses...
Starting dhcpcd.
Starting mdnsd.
Building databases: dev, utmp, utmpx.
Starting syslogd.
Mounting all file systems...
Clearing temporary files.
Updating fontconfig cache: done.
Creating a.out runtime link editor directory cache.
Checking quotas: done.
Setting securelevel: kern.securelevel: 0 -> 1
Starting virecover.
Checking for core dump...
savecore: no core dump
Starting devpubd.
Starting local daemons:.
Updating motd.
Starting ntpd.
Jul  8 11:05:58 sunxi ntpd[595]: ntp_rlimit: Cannot set RLIMIT_STACK: Invalid argument
Starting sshd.
Starting inetd.
Starting cron.
Sat Jul  8 11:06:02 ADT 2017

NetBSD/evbarm (sunxi) (console)

login:

          LLDB: Sanitizing the debugger's runtime        
This month I started to work on correcting the ptrace(2) layer, as test suites used to trigger failures on the kernel side. This ended up sanitizing the LLDB runtime as well, addressing both LLDB and NetBSD userland bugs.

It turned out that more bugs were unveiled along the way, so this is not the final report on LLDB.

The good

Besides the greater enhancements this month, I performed another cleanup of the ATF ptrace(2) tests. Additionally, I managed to unbreak the LLDB Debug build and to eliminate compiler warnings in the NetBSD Native Process Plugin.

It is worth noting that LLVM can run tests on NetBSD again, the patch in gtest/LLVM has been installed by Joerg Sonnenberg and a more generic one has been submitted to the upstream googletest repository. There was also an improvement in ftruncate(2) on the LLVM side (authored by Joerg).

Since LLD (the LLVM linker) is advancing rapidly, its NetBSD support has improved and it can now link a functional executable on NetBSD. I submitted a patch to stop it crashing on startup. It was nearly used for linking LLDB/NetBSD and it spotted a real linking error... however there are further issues that need to be addressed in the future. Currently LLD is not part of the mainline LLDB tasks - it's part of improving the work environment. This linker should reduce the linking time of LLDB - compared to the GNU linkers - by a factor of 3x-10x and save precious developer time. As of now, linking LLDB can take minutes on a modern amd64 machine designed for performance.

Kernel correctness

I have researched (in pkgsrc-wip) initial support for multiple threads in the NetBSD Native Process Plugin. When running the LLDB regression test suite, this code revealed new kernel bugs. This unfortunately affects the usability of a debugger in any multithreaded environment, and it explains why GDB was never doing its job properly in such circumstances.

One of the first errors was a kernel panic - an assertion firing - with PT_*STEP when the debuggee has more than a single thread. I have narrowed it down to misuse of lock primitives in the do_ptrace() kernel code. The fix has been committed.

LLDB and userland correctness

LLDB introduced support for kevent(2) and it contains the following function:

Status MainLoop::RunImpl::Poll() {
  in_events.resize(loop.m_read_fds.size());
  unsigned i = 0;
  for (auto &fd : loop.m_read_fds)
    EV_SET(&in_events[i++], fd.first, EVFILT_READ, EV_ADD, 0, 0, 0);
  num_events = kevent(loop.m_kqueue, in_events.data(), in_events.size(),
                      out_events, llvm::array_lengthof(out_events), nullptr);
  if (num_events < 0)
    return Status("kevent() failed with error %d\n", num_events);
  return Status();
}

It works on FreeBSD and Mac OS X; however, it broke on NetBSD.

Culprit line:

   EV_SET(&in_events[i++], fd.first, EVFILT_READ, EV_ADD, 0, 0, 0);

FreeBSD defined EV_SET() as a macro this way:

#define EV_SET(kevp_, a, b, c, d, e, f) do {    \
        struct kevent *kevp = (kevp_);          \
        (kevp)->ident = (a);                    \
        (kevp)->filter = (b);                   \
        (kevp)->flags = (c);                    \
        (kevp)->fflags = (d);                   \
        (kevp)->data = (e);                     \
        (kevp)->udata = (f);                    \
} while(0)

The NetBSD version was different:

#define EV_SET(kevp, a, b, c, d, e, f)                                  \
do {                                                                    \
        (kevp)->ident = (a);                                            \
        (kevp)->filter = (b);                                           \
        (kevp)->flags = (c);                                            \
        (kevp)->fflags = (d);                                           \
        (kevp)->data = (e);                                             \
        (kevp)->udata = (f);                                            \
} while (/* CONSTCOND */ 0)

This resulted in heap damage: the NetBSD macro evaluates kevp once per field, so the argument &in_events[i++] incremented i every time a value was assigned through (kevp)->.
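
A standalone demonstration of the hazard, and of one way to sidestep it by hoisting the pointer (a sketch; I am not claiming this is the exact fix that landed upstream):

#include <cassert>

// A stand-in kevent and the NetBSD-style macro, which expands its
// first argument once per field.
struct kevent { int ident, filter, flags, fflags, data; void *udata; };

#define EV_SET(kevp, a, b, c, d, e, f) do {  \
        (kevp)->ident  = (a);                \
        (kevp)->filter = (b);                \
        (kevp)->flags  = (c);                \
        (kevp)->fflags = (d);                \
        (kevp)->data   = (e);                \
        (kevp)->udata  = (f);                \
} while (0)

int main(void) {
	kevent v[8] = {};
	unsigned i = 0;
	EV_SET(&v[i++], 1, 2, 3, 4, 5, nullptr);  // buggy: i is bumped per field
	assert(i == 6);                           // six fields, six increments
	i = 0;
	kevent *kevp = &v[i++];                   // fixed: evaluate once
	EV_SET(kevp, 1, 2, 3, 4, 5, nullptr);
	assert(i == 1);
	return 0;
}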

Without the GCC asan and ubsan tools, finding this bug would have been much more time consuming, as the random memory corruption affected an unrelated lambda function in a different part of the code.

To use the GCC sanitizers with packages from pkgsrc, on NetBSD-current, one has to use one or both of these lines:

_WRAP_EXTRA_ARGS.CXX+= -fno-omit-frame-pointer -O0 -g -ggdb -U_FORTIFY_SOURCE -fsanitize=address -fsanitize=undefined -lasan -lubsan
CWRAPPERS_APPEND.cxx+= -fno-omit-frame-pointer -O0 -g -ggdb -U_FORTIFY_SOURCE -fsanitize=address -fsanitize=undefined -lasan -lubsan

While there, I fixed another - generic - bug in the LLVM headers. The Triple class constructor hadn't initialized the SubArch field, which upset the GCC address sanitizer. It was triggered in LLDB by the following code:

void ArchSpec::Clear() {
  m_triple = llvm::Triple();
  m_core = kCore_invalid;
  m_byte_order = eByteOrderInvalid;
  m_distribution_id.Clear();
  m_flags = 0;
}
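
The shape of the fix can be shown in a standalone sketch (an abridged, hypothetical field list - not the actual llvm::Triple definition): give SubArch a default member initializer so a default-constructed Triple carries no indeterminate bits.

#include <string>

class Triple {
public:
	enum SubArchType { NoSubArch, ARMSubArch_v7 };
	Triple() = default;
private:
	std::string Data;                 // initialized by its own constructor
	SubArchType SubArch = NoSubArch;  // previously left uninitialized
};

int main(void) {
	Triple t;  // fully initialized; nothing for the sanitizer to flag
	(void)t;
	return 0;
}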

I have filed a patch for review to address this.

The bad

Unfortunately this is not the full story and there is further mandatory work.

LLDB acceleration

The EV_SET() bug broke upstream LLDB over a month ago, and during this period the debugger was significantly accelerated and parallelized. It is difficult to say definitively, but this might be why the tracer's runtime broke due to threading desynchronization. LLDB behaves differently when run standalone, under ktruss(1) and under gdb(1) - the shared bug is that it always fails in one way or another, which isn't trivial to debug.

The ugly

There are also unpleasant issues at the core of the Operating System.

Kernel troubles

Another single-stepping bug affects a further aspect of correctness - this time the reliable execution of a program: processes die in non-deterministic ways when single-stepped. My current impression is that there is no appropriate translation between process and thread (LWP) states under a debugger.

These issues are sibling problems to unreliable PT_RESUME and PT_SUSPEND.

In order to be able to address this appropriately, this month I have diligently studied the Solaris Internals book to get a better picture of the design of NetBSD kernel multiprocessing, which was modeled after this commercial UNIX.

Plan for the next milestone

The current troubles can be summarized as data races in the kernel and at the same time in LLDB. I have decided to port the LLVM sanitizers, as I require the Thread Sanitizer (tsan). Temporarily I have removed the code for tracing processes with multiple threads to hide the known kernel bugs and focus on the LLDB races.

Unfortunately LLDB is not easily bisectable (given the build time of the LLVM+Clang+LLDB stack and the number of revisions), therefore the debugging has to be performed on the most recent code from the upstream trunk.

This work was sponsored by The NetBSD Foundation.

The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue funding projects and services to the open-source community. Please consider visiting the following URL, and chip in what you can:

http://netbsd.org/donations/#how-to-donate

          NetBSD on the NVIDIA Jetson TK1        

The NVIDIA Jetson TK1 is a quad-core ARMv7 development board that features an NVIDIA Tegra K1 (32-bit) SoC (quad-core Cortex-A15 @ 2.3GHz), 2GB RAM, gigabit ethernet, SATA, HDMI, mini-PCIE, and more.

Since my last status update on the port, HDMI video and audio support have been added along with a handful of stability fixes.

NetBSD -current includes support for this board with the JETSONTK1 kernel. The following hardware is supported:

  • Cortex-A15 (multiprocessor)
  • CPU frequency scaling
  • ARM generic timer
  • Clock and reset controller
  • GPIO controller
  • MPIO / pinmux controller
  • Memory controller
  • Power management controller
  • I2C controller
  • UART serial console
  • Watchdog timer
  • SDMMC controller
  • USB 2.0 controller
  • AHCI SATA controller
  • HD audio controller (HDMI audio)
  • HDMI framebuffer console
  • PCI express controller, including mini-PCIE slot
  • On-board Realtek 8111G gigabit ethernet
  • Serial EEPROM
  • Temperature sensor
  • RF kill switch
  • Power button

Of course, Xorg works too:

See the NetBSD/evbarm on NVIDIA Tegra wiki page for more details.

Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
    2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015
    The NetBSD Foundation, Inc.  All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
    The Regents of the University of California.  All rights reserved.

NetBSD 7.99.20 (JETSONTK1) #189: Sat Jul 25 12:47:31 ADT 2015
	jmcneill@megatron.local:/Users/jmcneill/netbsd/src/sys/arch/evbarm/compile/obj/JETSONTK1
total memory = 2047 MB
avail memory = 2021 MB
sysctl_createv: sysctl_create(machine_arch) returned 17
timecounter: Timecounters tick every 10.000 msec
mainbus0 (root)
cpu0 at mainbus0 core 0: 2292 MHz Cortex-A15 r3p3 (Cortex V7A core)
cpu0: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu0: sctlr: 0xc51c7d
cpu0: actlr: 0x80000041
cpu0: revidr: 0
cpu0: mpidr: 0x80000000
cpu0: isar: [0]=0x2101110 [1]=0x13112111 [2]=0x21232041 [3]=0x11112131, [4]=0x10011142, [5]=0
cpu0: mmfr: [0]=0x10201105 [1]=0x40000000 [2]=0x1240000 [3]=0x2102211
cpu0: pfr: [0]=0x1131 [1]=0x11011
cpu0: 32KB/64B 2-way L1 PIPT Instruction cache
cpu0: 32KB/64B 2-way write-back-locking-C L1 PIPT Data cache
cpu0: 2048KB/64B 16-way write-through L2 PIPT Unified cache
vfp0 at cpu0: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
vfp0: mvfr: [0]=0x10110222 [1]=0x11111111
cpu1 at mainbus0 core 1
cpu2 at mainbus0 core 2
cpu3 at mainbus0 core 3
armperiph0 at mainbus0
armgic0 at armperiph0: Generic Interrupt Controller, 192 sources (183 valid)
armgic0: 32 Priorities, 160 SPIs, 7 PPIs, 16 SGIs
armgtmr0 at armperiph0: ARMv7 Generic 64-bit Timer (12000 kHz)
armgtmr0: interrupting on irq 27
timecounter: Timecounter "armgtmr0" frequency 12000000 Hz quality 500
tegraio0 at mainbus0: Tegra K1 (T124)
tegracar0 at tegraio0: CAR
tegracar0: PLLX = 2292000000 Hz
tegracar0: PLLC = 88000000 Hz
tegracar0: PLLE = 292968 Hz
tegracar0: PLLU = 480000000 Hz
tegracar0: PLLP0 = 408000000 Hz
tegracar0: PLLD2 = 594000000 Hz
tegragpio0 at tegraio0: GPIO
gpio0 at tegragpio0 (A): 8 pins
gpio1 at tegragpio0 (B): 8 pins
gpio2 at tegragpio0 (C): 8 pins
gpio3 at tegragpio0 (D): 8 pins
gpio4 at tegragpio0 (E): 8 pins
gpio5 at tegragpio0 (F): 8 pins
gpio6 at tegragpio0 (G): 8 pins
gpio7 at tegragpio0 (H): 8 pins
gpio8 at tegragpio0 (I): 8 pins
gpio9 at tegragpio0 (J): 8 pins
gpio10 at tegragpio0 (K): 8 pins
gpio11 at tegragpio0 (L): 8 pins
gpio12 at tegragpio0 (M): 8 pins
gpio13 at tegragpio0 (N): 8 pins
gpio14 at tegragpio0 (O): 8 pins
gpio15 at tegragpio0 (P): 8 pins
gpio16 at tegragpio0 (Q): 8 pins
gpiobutton0 at gpio16 pins 0: Power button
gpio17 at tegragpio0 (R): 8 pins
gpio18 at tegragpio0 (S): 8 pins
gpio19 at tegragpio0 (T): 8 pins
gpio20 at tegragpio0 (U): 8 pins
gpio21 at tegragpio0 (V): 8 pins
gpio22 at tegragpio0 (W): 8 pins
gpio23 at tegragpio0 (X): 8 pins
gpiorfkill0 at gpio23 pins 7
gpio24 at tegragpio0 (Y): 8 pins
gpio25 at tegragpio0 (Z): 8 pins
gpio26 at tegragpio0 (AA): 8 pins
gpio27 at tegragpio0 (BB): 8 pins
gpio28 at tegragpio0 (CC): 8 pins
gpio29 at tegragpio0 (DD): 8 pins
gpio30 at tegragpio0 (EE): 8 pins
tegratimer0 at tegraio0: Timers
tegratimer0: default watchdog period is 10 seconds
tegramc0 at tegraio0: MC
tegrapmc0 at tegraio0: PMC
tegraxusbpad0 at tegraio0: XUSB PADCTL
tegrampio0 at tegraio0: MPIO
tegrai2c0 at tegraio0 port 0: I2C1
tegrai2c0: interrupting on irq 70
iic0 at tegrai2c0: I2C bus
seeprom0 at iic0 addr 0x56: AT24Cxx or compatible EEPROM: size 256
titemp0 at iic0 addr 0x4c: TMP451
tegrai2c1 at tegraio0 port 1: I2C2
tegrai2c1: interrupting on irq 116
iic1 at tegrai2c1: I2C bus
tegrai2c2 at tegraio0 port 2: I2C3
tegrai2c2: interrupting on irq 124
iic2 at tegrai2c2: I2C bus
tegrai2c3 at tegraio0 port 3: I2C4
tegrai2c3: interrupting on irq 152
iic3 at tegrai2c3: I2C bus
ddc0 at iic3 addr 0x50: DDC
tegrai2c4 at tegraio0 port 4: I2C5
tegrai2c4: interrupting on irq 85
iic4 at tegrai2c4: I2C bus
com3 at tegraio0 port 3: ns16550a, working fifo
com3: console
tegrartc0 at tegraio0: RTC
sdhc2 at tegraio0 port 2: SDMMC3
sdhc2: interrupting on irq 51
sdhc2: SDHC 4.0, rev 3, DMA, 48000 kHz, 3.0V 3.3V, 4096 byte blocks
sdmmc2 at sdhc2 slot 0
ahcisata0 at tegraio0: SATA
ahcisata0: interrupting on irq 55
ahcisata0: AHCI revision 1.31, 2 ports, 32 slots, CAP 0xe620ff01<PSC,SSC,PMD,ISS=0x2=Gen2,SAL,SALP,SSNTF,SNCQ,S64A>
atabus0 at ahcisata0 channel 0
hdaudio0 at tegraio0: HDA
hdaudio0: interrupting on irq 113
hdafg0 at hdaudio0: NVIDIA Tegra124 HDMI
hdafg0: HDMI00 8ch: Digital Out [Jack]
hdafg0: 8ch/0ch 48000Hz PCM16*
audio0 at hdafg0: full duplex, playback, capture, mmap, independent
ehci0 at tegraio0 port 0: USB1
ehci0: interrupting on irq 52
ehci0: EHCI version 1.10
ehci0: switching to host mode
usb0 at ehci0: USB revision 2.0
ehci1 at tegraio0 port 1: USB2
ehci1: interrupting on irq 53
ehci1: EHCI version 1.10
ehci1: switching to host mode
usb1 at ehci1: USB revision 2.0
ehci2 at tegraio0 port 2: USB3
ehci2: interrupting on irq 129
ehci2: EHCI version 1.10
ehci2: switching to host mode
usb2 at ehci2: USB revision 2.0
tegrahost1x0 at tegraio0: HOST1X
tegradc0 at tegraio0 port 0: DISPLAYA
tegradc1 at tegraio0 port 1: DISPLAYB
tegrahdmi0 at tegraio0: HDMI
tegrahdmi0: display connected
no data for est. mode 640x480x67
tegrahdmi0: connected to HDMI display
genfb0 at tegradc1 output tegrahdmi0
genfb0: framebuffer at 0x9ab00000, size 1920x1080, depth 32, stride 7680
wsdisplay0 at genfb0 kbdmux 1
wsmux1: connecting to wsdisplay0
wsdisplay0: screen 0-3 added (default, vt100 emulation)
tegrapcie0 at tegraio0: PCIE
tegrapcie0: interrupting on irq 130
pci0 at tegrapcie0 bus 0
pci0: memory space enabled, rd/line, rd/mult, wr/inv ok
ppb0 at pci0 dev 0 function 0: vendor 10de product 0e12 (rev. 0xa1)
ppb0: PCI Express capability version 2 <Root Port of PCI-E Root Complex> x2 @ 5.0GT/s
ppb0: link is x1 @ 2.5GT/s
pci1 at ppb0 bus 1
pci1: memory space enabled, rd/line, wr/inv ok
athn0 at pci1 dev 0 function 0athn0: Atheros AR9285
athn0: rev 2 (1T1R), ROM rev 13, address 00:17:c4:d7:d0:58
athn0: interrupting at irq 130
athn0: 11b rates: 1Mbps 2Mbps 5.5Mbps 11Mbps
athn0: 11g rates: 1Mbps 2Mbps 5.5Mbps 11Mbps 6Mbps 9Mbps 12Mbps 18Mbps 24Mbps 36Mbps 48Mbps 54Mbps
ppb1 at pci0 dev 1 function 0: vendor 10de product 0e13 (rev. 0xa1)
ppb1: PCI Express capability version 2 <Root Port of PCI-E Root Complex> x1 @ 5.0GT/s
ppb1: link is x1 @ 2.5GT/s
pci2 at ppb1 bus 2
pci2: memory space enabled, rd/line, wr/inv ok
re0 at pci2 dev 0 function 0: RealTek 8168/8111 PCIe Gigabit Ethernet (rev. 0x0c)
re0: interrupting at irq 130
re0: Ethernet address 00:04:4b:2f:51:a2
re0: using 512 tx descriptors
rgephy0 at re0 phy 7: RTL8251 1000BASE-T media interface, rev. 0
rgephy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT-FDX, auto
timecounter: Timecounter "clockinterrupt" frequency 100 Hz quality 0
cpu2: 2292 MHz Cortex-A15 r3p3 (Cortex V7A core)
cpu2: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu2: sctlr: 0xc51c7d
cpu2: actlr: 0x80000040
cpu2: revidr: 0
cpu2: mpidr: 0x80000002
cpu2: isar: [0]=0x2101110 [1]=0x13112111 [2]=0x21232041 [3]=0x11112131, [4]=0x10011142, [5]=0
cpu2: mmfr: [0]=0x10201105 [1]=0x40000000 [2]=0x1240000 [3]=0x2102211
cpu2: pfr: [0]=0x1131 [1]=0x11011
cpu2: 32KB/64B 2-way L1 PIPT Instruction cache
cpu2: 32KB/64B 2-way write-back-locking-C L1 PIPT Data cache
cpu2: 2048KB/64B 16-way write-through L2 PIPT Unified cache
vfp2 at cpu2: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
vfp2: mvfr: [0]=0x10110222 [1]=0x11111111
cpu1: 2292 MHz Cortex-A15 r3p3 (Cortex V7A core)
cpu1: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu1: sctlr: 0xc51c7d
cpu1: actlr: 0x80000040
cpu1: revidr: 0
cpu1: mpidr: 0x80000001
cpu1: isar: [0]=0x2101110 [1]=0x13112111 [2]=0x21232041 [3]=0x11112131, [4]=0x10011142, [5]=0
cpu1: mmfr: [0]=0x10201105 [1]=0x40000000 [2]=0x1240000 [3]=0x2102211
cpu1: pfr: [0]=0x1131 [1]=0x11011
cpu1: 32KB/64B 2-way L1 PIPT Instruction cache
cpu1: 32KB/64B 2-way write-back-locking-C L1 PIPT Data cache
cpu1: 2048KB/64B 16-way write-through L2 PIPT Unified cache
vfp1 at cpu1: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
vfp1: mvfr: [0]=0x10110222 [1]=0x11111111
cpu3: 2292 MHz Cortex-A15 r3p3 (Cortex V7A core)
cpu3: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu3: sctlr: 0xc51c7d
cpu3: actlr: 0x80000040
cpu3: revidr: 0
cpu3: mpidr: 0x80000003
cpu3: isar: [0]=0x2101110 [1]=0x13112111 [2]=0x21232041 [3]=0x11112131, [4]=0x10011142, [5]=0
cpu3: mmfr: [0]=0x10201105 [1]=0x40000000 [2]=0x1240000 [3]=0x2102211
cpu3: pfr: [0]=0x1131 [1]=0x11011
cpu3: 32KB/64B 2-way L1 PIPT Instruction cache
cpu3: 32KB/64B 2-way write-back-locking-C L1 PIPT Data cache
cpu3: 2048KB/64B 16-way write-through L2 PIPT Unified cache
vfp3 at cpu3: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
vfp3: mvfr: [0]=0x10110222 [1]=0x11111111
uhub0 at usb0: Tegra EHCI root hub, class 9/0, rev 2.00/1.00, addr 1
uhub0: 1 port with 1 removable, self powered
uhub1 at usb2: Tegra EHCI root hub, class 9/0, rev 2.00/1.00, addr 1
uhub1: 1 port with 1 removable, self powered
uhub2 at usb1: Tegra EHCI root hub, class 9/0, rev 2.00/1.00, addr 1
uhub2: 1 port with 1 removable, self powered
ahcisata0 port 0: device present, speed: 3.0Gb/s
ld1 at sdmmc2: <0x27:0x5048:SD64G:0x30:0x01ce4def:0x0dc>
ld1: 59504 MB, 7585 cyl, 255 head, 63 sec, 512 bytes/sect x 121864192 sectors
ld1: 4-bit width, bus clock 48.000 MHz
wd0 at atabus0 drive 0
wd0: <OCZ-AGILITY3>
wd0: drive supports 16-sector PIO transfers, LBA48 addressing
wd0: 111 GB, 232581 cyl, 16 head, 63 sec, 512 bytes/sect x 234441648 sectors
wd0: drive supports PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133)
wd0(ahcisata0:0:0): using PIO mode 4, DMA mode 2, Ultra-DMA mode 6 (Ultra/133) (using DMA)
uhidev0 at uhub0 port 1 configuration 1 interface 0
uhidev0: Logitech USB Receiver, rev 2.00/29.00, addr 2, iclass 3/1
ukbd0 at uhidev0: 8 modifier keys, 6 key codes
wskbd0 at ukbd0 mux 1
wskbd0: connecting to wsdisplay0
uhidev1 at uhub0 port 1 configuration 1 interface 1
uhidev1: Logitech USB Receiver, rev 2.00/29.00, addr 2, iclass 3/1
uhidev1: 17 report ids
ums0 at uhidev1 reportid 2: 16 buttons, W and Z dirs
wsmouse0 at ums0 mux 0
uhid0 at uhidev1 reportid 3: input=4, output=0, feature=0
uhid1 at uhidev1 reportid 4: input=1, output=0, feature=0
uhid2 at uhidev1 reportid 16: input=6, output=6, feature=0
uhid3 at uhidev1 reportid 17: input=19, output=19, feature=0
boot device: ld1
root on ld1a dumps on ld1b
mountroot: trying smbfs...
mountroot: trying ntfs...
mountroot: trying nfs...
mountroot: trying msdos...
mountroot: trying ext2fs...
mountroot: trying ffs...
root file system type: ffs
kern.module.path=/stand/evbarm/7.99.20/modules
WARNING: preposterous TOD clock time
WARNING: using filesystem time
WARNING: CHECK AND RESET THE DATE!
init: copying out path `/sbin/init' 11
WARNING: module error: vfs load failed for `compat', error 2
WARNING: module error: vfs load failed for `compat', error 2
WARNING: module error: vfs load failed for `compat', error 2
WARNING: module error: vfs load failed for `compat', error 2
WARNING: module error: vfs load failed for `compat', error 2
WARNING: module error: vfs load failed for `compat', error 2
re0: link state UP (was UNKNOWN)
athn0: link state UP (was UNKNOWN)

          CI20 status update        
I didn't really have much time to work on more hardware support for the CI20, but it's been a while since the last post, so here's what I've got:
  • drivers for on-chip ehci and ohci have been added. Ohci works fine; ehci for some reason detects all high-speed devices as full speed and hands them over to ohci. No idea why.
  • I2C ports work now, including the onboard RTC. You have to hook up your own battery though.
  • we're no longer limited to 256MB, all RAM is usable now.
  • onboard ethernet is supported by the dme driver.
There's also an unfinished driver for the SD/MMC ports.
The RTC is a bit funny - according to the manual there's a Pericom RTC on iic4 addr 0x68 - not on my preproduction board. I've got something that looks like a PCF8563 at addr 0x51, and so do the production boards that I know of. Some pins on one of the expansion connectors seem to be for a battery but I haven't been able to confirm that yet. Either way, since the main connector is supposed to be Raspberry Pi compatible any RTC module for the RPi should Just Work(tm), with the appropriate line added to the kernel config.
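For instance, matching the attachment that shows up in the boot transcript below, the onboard PCF8563 corresponds to a kernel config line of this shape (an external RPi-style module would differ only in the bus number and address):
pcf8563rtc0 at iic4 addr 0x51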
Some more work has been done under the hood, like some preparations for SMP support.

Here's the obligatory boot transcript, complete with incomplete drivers and debug spam:

U-Boot SPL 2013.10-rc3-g9329ab16a204 (Jun 26 2014 - 09:43:22)
SDRAM H5TQ2G83CFR initialization... done


U-Boot 2013.10-rc3-g9329ab16a204 (Jun 26 2014 - 09:43:22)

Board: ci20 (Ingenic XBurst JZ4780 SoC)
DRAM:  1 GiB
NAND:  8192 MiB
MMC:   jz_mmc msc1: 0
In:    eserial3
Out:   eserial3
Err:   eserial3
Net:   dm9000
ci20# dhcp
ERROR: resetting DM9000 -> not responding
dm9000 i/o: 0xb6000000, id: 0x90000a46 
DM9000: running in 8 bit mode
MAC: d0:31:10:ff:7e:89
operating at 100M full duplex mode
BOOTP broadcast 1
DHCP client bound to address 192.168.0.47
*** Warning: no boot file name; using 'C0A8002F.img'
Using dm9000 device
TFTP from server 192.168.0.44; our IP address is 192.168.0.47
Filename 'C0A8002F.img'.
Load address: 0x88000000
Loading: #################################################################
	 ##############################################
	 369.1 KiB/s
done
Bytes transferred = 1621771 (18bf0b hex)
ci20# bootm
## Booting kernel from Legacy Image at 88000000 ...
   Image Name:   evbmips 7.99.18 (CI20)
   Image Type:   MIPS NetBSD Kernel Image (gzip compressed)
   Data Size:    1621707 Bytes = 1.5 MiB
   Load Address: 80020000
   Entry Point:  80020000
   Verifying Checksum ... OK
   Uncompressing Kernel Image ... OK
subcommand not supported
ci20# g 80020000
## Starting application at 0x80020000 ...
pmap_steal_memory: seg 0: 0x6bf 0x6bf 0xffff 0xffff
Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
    2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015
    The NetBSD Foundation, Inc.  All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
    The Regents of the University of California.  All rights reserved.

NetBSD 7.99.18 (CI20) #20: Thu Jun 11 13:06:41 EDT 2015
	ml@blackbush:/home/build/obj_evbmips32/sys/arch/evbmips/compile/CI20
Ingenic XBurst
total memory = 1024 MB
avail memory = 997 MB
mainbus0 (root)
cpu0 at mainbus0: 1200.00MHz (hz cycles = 120000, delay divisor = 12)
cpu0: Ingenic XBurst (0x3ee1024f) Rev. 79 with unknown FPC type (0x330000) Rev. 0
cpu0: 32 TLB entries, 16MB max page size
cpu0: 32KB/32B 8-way set-associative L1 instruction cache
cpu0: 32KB/32B 8-way set-associative write-back L1 data cache
com0 at mainbus0: Ingenic UART, working fifo
com0: console

apbus0 at mainbus0
JZ_CLKGR0 3f587fe0
JZ_CLKGR1 000073e0
JZ_SPCR0  00000000
JZ_SPCR1  00000000
JZ_SRBC   00000002
JZ_OPCR   000015e6
JZ_UHCCDR c0000000
dwctwo0 at apbus0 addr 0x13500000 irq 21: USB OTG controller
ohci0 at apbus0 addr 0x134a0000 irq 5: OHCI USB controller
ohci0: OHCI version 1.0
usb0 at ohci0: USB revision 1.0
ehci0 at apbus0 addr 0x13490000 irq 20: EHCI USB controller
48352 46418
UHCCDR: c0000000
UHCCDR: 60000017
caplength 10
ehci0: companion controller, 1 port each: ohci0
usb1 at ehci0: USB revision 2.0
dme0 at apbus0 addr 0x16000000: DM9000 Ethernet controller
jzgpio at apbus0 addr 0x10010000 not configured
jzgpio at apbus0 addr 0x10010100 not configured
jzgpio at apbus0 addr 0x10010200 not configured
jzgpio at apbus0 addr 0x10010300 not configured
jzgpio at apbus0 addr 0x10010400 not configured
jzgpio at apbus0 addr 0x10010500 not configured
jziic0 at apbus0 addr 0x10050000 irq 60: SMBus controller
iic0 at jziic0: I2C bus
jziic1 at apbus0 addr 0x10051000 irq 59: SMBus controller
iic1 at jziic1: I2C bus
jziic2 at apbus0 addr 0x10052000 irq 58: SMBus controller
iic2 at jziic2: I2C bus
jziic3 at apbus0 addr 0x10053000 irq 57: SMBus controller
iic3 at jziic3: I2C bus
jziic4 at apbus0 addr 0x10054000 irq 56: SMBus controller
iic4 at jziic4: I2C bus
pcf8563rtc0 at iic4 addr 0x51: NXP PCF8563 Real-time Clock
jzmmc0 at apbus0 addr 0x13450000 irq 37: SD/MMC controller
25227 24176
jzmmc0: going to use 25227 kHz
MSC*CDR: 80000000
sdmmc0 at jzmmc0
jzmmc1 at apbus0 addr 0x13460000 irq 36: SD/MMC controller
25227 24176
jzmmc1: going to use 25227 kHz
MSC*CDR: 20000018
sdmmc1 at jzmmc1
jzmmc2 at apbus0 addr 0x13470000 irq 35: SD/MMC controller
25227 24176
jzmmc2: going to use 25227 kHz
MSC*CDR: 00000000
sdmmc2 at jzmmc2
jzfb at apbus0 addr 0x13050000 not configured
JZ_CLKGR0 2c586780
JZ_CLKGR1 000060e0
usb2 at dwctwo0: USB revision 2.0
starting timer interrupt...
jzmmc_bus_clock: 400
sh: 6 freq: 394
sdmmc0: couldn't identify card
sdmmc0: no functions
jzmmc_bus_clock: 0
sh: 7 freq: 197
jzmmc_bus_clock: 400
sh: 6 freq: 394
sdmmc1: couldn't identify card
sdmmc1: no functions
jzmmc_bus_clock: 0
sh: 7 freq: 197
jzmmc_bus_clock: 400
sh: 6 freq: 394
sdmmc2: couldn't identify card
sdmmc2: no functions
jzmmc_bus_clock: 0
sh: 7 freq: 197
uhub0 at usb0: Ingenic OHCI root hub, class 9/0, rev 1.00/1.00, addr 1
uhub1 at usb1: Ingenic EHCI root hub, class 9/0, rev 2.00/1.00, addr 1
uhub2 at usb2: Ingenic DWC2 root hub, class 9/0, rev 2.00/1.00, addr 1
PS_LS(1): 00001801
port1: 00001801
port2: 00000000
ehci0: handing over full speed device on port 1 to ohci0
umass0 at uhub2 port 1 configuration 1 interface 0
umass0: Generic Mass Storage Device, rev 2.00/1.05, addr 2
scsibus0 at umass0: 2 targets, 1 lun per target
sd0 at scsibus0 target 0 lun 0:  disk removable
sd0: fabricating a geometry
sd0: 15193 MB, 15193 cyl, 64 head, 32 sec, 512 bytes/sect x 31116288 sectors
root on sd0a dumps on sd0b
sd0: fabricating a geometry
/: replaying log to memory
kern.module.path=/stand/evbmips/7.99.18/modules
pid 1(init): ABI set to O32 (e_flags=0x70001007)
Thu Jun 11 07:15:08 GMT 2015
uhub3 at uhub0 port 1: Terminus Technology USB 2.0 Hub [MTT], class 9/0, rev 2.00/1.00, addr 2
Starting root file system check:
/dev/rsd0a: file system is journaled; not checking
/: replaying log to disk
umass1 at uhub3 port 5 configuration 1 interface 0
umass1: LaCie P'9220 Mobile Drive, rev 2.10/0.06, addr 3
scsibus1 at umass1: 2 targets, 1 lun per target
sd1 at scsibus1 target 0 lun 0:  disk fixed
sd1: 465 GB, 16383 cyl, 16 head, 63 sec, 512 bytes/sect x 976773168 sectors
swapctl: adding /dev/sd1b as swap device at priority 0
Starting file system checks:
/dev/rsd0e: file system is journaled; not checking
/dev/rsd0g: file system is journaled; not checking
/dev/rsd0f: 1 files, 249856 free (31232 clusters)
/dev/rsd1e: file system is journaled; not checking
random_seed: /var/db/entropy-file: Not present
Setting tty flags.
Setting sysctl variables:
ddb.onpanic: 1 -> 0
Starting network.
Hostname: ci20
IPv6 mode: autoconfigured host
Configuring network interfaces: dme0.
Adding interface aliases:.
add net default: gateway 192.168.0.1
Waiting for DAD to complete for statically configured addresses...
/usr: replaying log to disk
Building databases: dev, dev, utmp, utmpx.
Starting syslogd.
Starting rpcbind.
Mounting all file systems...
/stuff: replaying log to disk
/home: replaying log to disk
Clearing temporary files.
Updating fontconfig cache: done.
Checking quotas: done.
/etc/rc: WARNING: /etc/exports is not readable.
/etc/rc.d/mountd exited with code 1
Setting securelevel: kern.securelevel: 0 -> 1
Starting virecover.
Checking for core dump...
savecore: no core dump
Starting local daemons:.
Updating motd.
Starting ntpd.
Starting sshd.
Starting mdnsd.
Jun 11 03:23:01 ci20 mdnsd: mDNSResponder (Engineering Build) starting
Starting inetd.
Starting cron.
The following components reported failures:
    /etc/rc.d/mountd
See /var/run/rc.log for more information.
Thu Jun 11 03:23:02 EDT 2015

NetBSD/evbmips (ci20) (console)

login:

          NetBSD ported to Hardkernel ODROID-C1        

The Hardkernel ODROID-C1 is a quad-core ARMv7 development board that features an Amlogic S805 SoC (quad-core Cortex-A5 @ 1.5GHz), 1GB RAM and gigabit ethernet for $35 USD.

The ODROID-C1 is the first Cortex-A5 board supported by NetBSD. Matt Thomas (matt@) added initial Cortex-A5 support to the tree, and based on his work I added support for the Amlogic S805 SoC.

NetBSD -current (and soon 7.0) includes support for this board with the ODROID-C1 kernel. The following hardware is supported:

  • Cortex-A5 (multiprocessor)
  • CPU frequency scaling
  • L2 cache controller
  • Interrupt controller
  • Cortex-A5 global timer
  • Cortex-A5 watchdog
  • UART console
  • USB OTG controller
  • Gigabit ethernet
  • SD card slot
  • Hardware random number generator

More information can be found on the NetBSD/evbarm on Hardkernel ODROID-C1 wiki page.

Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
    2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015
    The NetBSD Foundation, Inc.  All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
    The Regents of the University of California.  All rights reserved.

NetBSD 7.99.5 (ODROID-C1) #350: Wed Mar 18 19:45:17 ADT 2015
        Jared@Jared-PC:/cygdrive/d/netbsd/src/sys/arch/evbarm/compile/obj/ODROID-C1
total memory = 1024 MB
avail memory = 1008 MB
sysctl_createv: sysctl_create(machine_arch) returned 17
mainbus0 (root)
cpu0 at mainbus0 core 0: 1512 MHz Cortex-A5 r0p1 (Cortex V7A core)
cpu0: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu0: 32KB/32B 2-way L1 VIPT Instruction cache
cpu0: 32KB/32B 4-way write-back-locking-C L1 PIPT Data cache
cpu0: 512KB/32B 8-way write-back L2 PIPT Unified cache
vfp0 at cpu0: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
cpu1 at mainbus0 core 1
cpu2 at mainbus0 core 2
cpu3 at mainbus0 core 3
armperiph0 at mainbus0
armgic0 at armperiph0: Generic Interrupt Controller, 256 sources (245 valid)
armgic0: 32 Priorities, 224 SPIs, 5 PPIs, 16 SGIs
a9tmr0 at armperiph0: A5 Global 64-bit Timer (378 MHz)
a9tmr0: interrupting on irq 27
a9wdt0 at armperiph0: A5 Watchdog Timer, default period is 12 seconds
arml2cc0 at armperiph0: ARM PL310 r3p3 L2 Cache Controller (disabled)
arml2cc0: cache enabled
amlogicio0 at mainbus0
amlogiccom0 at amlogicio0 port 0: console
amlogiccom0: interrupting at irq 122
amlogicrng0 at amlogicio0
dwctwo0 at amlogicio0 port 0: USB controller
dwctwo1 at amlogicio0 port 1: USB controller
awge0 at amlogicio0: Gigabit Ethernet Controller
awge0: interrupting on irq 40
awge0: Ethernet address: 00:1e:06:c3:7e:be
rgephy0 at awge0 phy 0: RTL8169S/8110S/8211 1000BASE-T media interface, rev. 6
rgephy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT-FDX, auto
rgephy1 at awge0 phy 1: RTL8169S/8110S/8211 1000BASE-T media interface, rev. 6
rgephy1: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, 1000baseT-FDX, auto
amlogicsdhc0 at amlogicio0 port 1: SDHC controller
amlogicsdhc0: interrupting on irq 110
usb0 at dwctwo0: USB revision 2.0
usb1 at dwctwo1: USB revision 2.0
cpu3: 1512 MHz Cortex-A5 r0p1 (Cortex V7A core)
cpu3: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu3: 32KB/32B 2-way L1 VIPT Instruction cache
cpu3: 32KB/32B 4-way write-back-locking-C L1 PIPT Data cache
cpu3: 512KB/32B 8-way write-back L2 PIPT Unified cache
vfp3 at cpu3: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
cpu2: 1512 MHz Cortex-A5 r0p1 (Cortex V7A core)
cpu2: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu2: 32KB/32B 2-way L1 VIPT Instruction cache
cpu2: 32KB/32B 4-way write-back-locking-C L1 PIPT Data cache
cpu2: 512KB/32B 8-way write-back L2 PIPT Unified cache
vfp2 at cpu2: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
cpu1: 1512 MHz Cortex-A5 r0p1 (Cortex V7A core)
cpu1: DC enabled IC enabled WB disabled EABT branch prediction enabled
cpu1: 32KB/32B 2-way L1 VIPT Instruction cache
cpu1: 32KB/32B 4-way write-back-locking-C L1 PIPT Data cache
cpu1: 512KB/32B 8-way write-back L2 PIPT Unified cache
vfp1 at cpu1: NEON MPE (VFP 3.0+), rounding, NaN propagation, denormals
sdmmc0 at amlogicsdhc0
uhub0 at usb0: vendor 0000 DWC2 root hub, class 9/0, rev 2.00/1.00, addr 1
uhub1 at usb1: vendor 0000 DWC2 root hub, class 9/0, rev 2.00/1.00, addr 1
ld0 at sdmmc0: <0x03:0x5344:SU08G:0x80:0x1cda770b:0x0ca>
ld0: 7580 MB, 3850 cyl, 64 head, 63 sec, 512 bytes/sect x 15523840 sectors
ld0: 4-bit width, bus clock 50.000 MHz
uhub2 at uhub1 port 1: vendor 05e3 USB2.0 Hub, class 9/0, rev 2.00/32.98, addr 2
uhub2: multiple transaction translators
boot device: ld0
root on ld0f dumps on ld0b
root file system type: ffs
kern.module.path=/stand/evbarm/7.99.5/modules
WARNING: no TOD clock present
WARNING: using filesystem time
WARNING: CHECK AND RESET THE DATE!
Wed Mar 18 22:45:46 UTC 2015
Starting root file system check:
/dev/rld0f: file system is clean; not checking
Starting file system checks:
/dev/rld0e: 5 files, 52556 free (13139 clusters)
random_seed: /var/db/entropy-file: Not present
Setting tty flags.
Setting sysctl variables:
ddb.onpanic: 1 -> 0
Starting network.
Hostname: empusa
IPv6 mode: host
Configuring network interfaces:.
Adding interface aliases:.
Waiting for DAD completion for statically configured addresses...
Starting dhcpcd.
Building databases: dev, utmp, utmpx.
Starting syslogd.
Mounting all file systems...
Clearing temporary files.
Updating fontconfig cache: done.
Creating a.out runtime link editor directory cache.
Checking quotas: done.
Setting securelevel: kern.securelevel: 0 -> 1
Starting virecover.
Starting local daemons:.
Updating motd.
Starting ntpd.
Starting sshd.
Starting mdnsd.
Mar 18 22:46:07 empusa mdnsd: mDNSResponder (Engineering Build) starting
Starting inetd.
Starting cron.
Wed Mar 18 22:46:08 UTC 2015

NetBSD/evbarm (empusa) (console)

login:

          CI20 reaches userland        
My CI20 now makes it to userland, with root and ethernet via USB. Here's the transcript:
U-Boot SPL 2013.10-rc3-g9329ab16a204 (Jun 26 2014 - 09:43:22)
SDRAM H5TQ2G83CFR initialization... done


U-Boot 2013.10-rc3-g9329ab16a204 (Jun 26 2014 - 09:43:22)

Board: ci20 (Ingenic XBurst JZ4780 SoC)
DRAM:  1 GiB
NAND:  8192 MiB
MMC:   jz_mmc msc1: 0
In:    eserial3
Out:   eserial3
Err:   eserial3
Net:   dm9000
ci20# dhcp 
ERROR: resetting DM9000 -> not responding
dm9000 i/o: 0xb6000000, id: 0x90000a46 
DM9000: running in 8 bit mode
MAC: d0:31:10:ff:7e:89
operating at 100M full duplex mode
BOOTP broadcast 1
DHCP client bound to address 192.168.0.47
*** Warning: no boot file name; using 'C0A8002F.img'
Using dm9000 device
TFTP from server 192.168.0.44; our IP address is 192.168.0.47
Filename 'C0A8002F.img'.
Load address: 0x88000000
Loading: #################################################################
	 #######################################
	 347.7 KiB/s
done
Bytes transferred = 1519445 (172f55 hex)
ci20# bootm
## Booting kernel from Legacy Image at 88000000 ...
   Image Name:   evbmips 7.99.5 (CI20)
   Image Type:   MIPS NetBSD Kernel Image (gzip compressed)
   Data Size:    1519381 Bytes = 1.4 MiB
   Load Address: 80020000
   Entry Point:  80020000
   Verifying Checksum ... OK
   Uncompressing Kernel Image ... OK
subcommand not supported
ci20# g 80020000
## Starting applicatipmap_steal_memory: seg 0: 0x3b3 0x3b3 0xffff 0xffff
Loaded initial symtab at 0x80304754, strtab at 0x8032d934, # entries 10499
Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
    2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015
    The NetBSD Foundation, Inc.  All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
    The Regents of the University of California.  All rights reserved.

NetBSD 7.99.5 (CI20) #170: Sat Mar  7 10:43:03 EST 2015
	ml@blackbush:/home/build/obj_evbmips32/sys/arch/evbmips/compile/CI20
Ingenic XBurst
total memory = 256 MB
avail memory = 247 MB
mainbus0 (root)
cpu0 at mainbus0: 1200.00MHz (hz cycles = 120000, delay divisor = 12)
cpu0: Ingenic XBurst (0x3ee1024f) Rev. 79 with unknown FPC type (0x330000) Rev. 0
cpu0: 32 TLB entries, 16MB max page size
cpu0: 32KB/32B 8-way set-associative L1 instruction cache
cpu0: 32KB/32B 8-way set-associative write-back L1 data cache
com0 at mainbus0: Ingenic UART, working fifo
com0: console

apbus0 at mainbus0
dwctwo0 at apbus0: USB controller
jzgpio at apbus0 not configured
jzfb at apbus0 not configured
usb0 at dwctwo0: USB revision 2.0
starting timer interrupt...
uhub0 at usb0: vendor 0000 DWC2 root hub, class 9/0, rev 2.00/1.00, addr 1
uhub1 at uhub0 port 1: vendor 1a40 USB 2.0 Hub [MTT], class 9/0, rev 2.00/1.00, addr 2
uhub1: multiple transaction translators
umass0 at uhub1 port 1 configuration 1 interface 0
umass0: LaCie P'9220 Mobile Drive, rev 2.10/0.06, addr 3
scsibus0 at umass0: 2 targets, 1 lun per target
sd0 at scsibus0 target 0 lun 0:  disk fixed
sd0: 465 GB, 16383 cyl, 16 head, 63 sec, 512 bytes/sect x 976773168 sectors
umass1 at uhub1 port 2 configuration 1 interface 0
umass1: Apple Inc. iPod, rev 2.00/0.01, addr 4
scsibus1 at umass1: 2 targets, 1 lun per target
sd1 at scsibus1 target 0 lun 0:  disk removable
uhidev0 at uhub1 port 4 configuration 1 interface 0
uhidev0: vendor 04d9 VISENTA V1, rev 1.10/1.00, addr 5, iclass 3/1
ukbd0 at uhidev0: 8 modifier keys, 6 key codes
wskbd0 at ukbd0 (mux ignored)
uhidev1 at uhub1 port 4 configuration 1 interface 1
uhidev1: vendor 04d9 VISENTA V1, rev 1.10/1.00, addr 5, iclass 3/1
uhidev1: 3 report ids
ums0 at uhidev1 reportid 1: 3 buttons, W and Z dirs
wsmouse0 at ums0 (mux ignored)
uhid0 at uhidev1 reportid 2: input=2, output=0, feature=0
uhid1 at uhidev1 reportid 3: input=1, output=0, feature=0
sd1: fabricating a geometry
sd1: 7601 MB, 950 cyl, 64 head, 32 sec, 4096 bytes/sect x 1946049 sectors
uhub2 at uhub1 port 6: vendor 03eb Standard USB Hub, class 9/0, rev 1.10/3.00, addr 6
axe0 at uhub2 port 1
axe0: D-LINK CORPORAION DUB-E100, rev 2.00/10.01, addr 7
axe0: Ethernet address 00:80:c8:37:00:e1
ukphy0 at axe0 phy 3: OUI 0x0009c3, model 0x0005, rev. 4
ukphy0: 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX, auto
root on sd0a dumps on sd0b
kern.module.path=/stand/evbmips/7.99.5/modules
WARNING: no TOD clock present
WARNING: using filesystem time
WARNING: CHECK AND RESET THE DATE!
init: copying out path `/sbin/init' 11
pid 1(init): ABI set to O32 (e_flags=0x70001007)
Thu Mar  5 18:27:33 UTC 2015
Not checking /: fs_passno = 0 in /etc/fstab
swapctl: adding /dev/sd0b as swap device at priority 0
Starting file system checks:
random_seed: /var/db/entropy-file: Not present
Setting tty flags.
Setting sysctl variables:
ddb.onpanic: 1 -> 0
Starting network.
Hostname: ci20
IPv6 mode: autoconfigured host
Configuring network interfaces: axe0.
Adding interface aliases:.
add net default: gateway 192.168.0.1
Waiting for DAD to complete for statically configured addresses...
axe0: link state UP (was UNKNOWN)
Building databases: dev, utmp, utmpx.
Starting syslogd.
Starting rpcbind.
Mounting all file systems...
Clearing temporary files.
Checking quotas: done.
Setting securelevel: kern.securelevel: 0 -> 1
Starting virecover.
Checking for core dump...
savecore: no core dump
Starting local daemons:.
Updating motd.
Starting sshd.
Starting inetd.
Starting cron.
Thu Mar  5 18:27:55 UTC 2015

NetBSD/evbmips (ci20) (console)

login: 

          The State of Accelerated Graphics on NetBSD/sparc, updated        
This is an update to this post.
Some new drivers were added ( cgtwelve, mgx ); others got improvements ( SX acceleration, support for 8bit tcx ).
  • Sun CG3 - has kernel support and works in X with the wsfb driver; the hardware doesn't support a hardware cursor or any kind of acceleration, so we won't bother with a dedicated X driver. The hardware supports 8 bit colour only.
  • Sun CG6 family, including GX, TGX, XGX and their plus variants - supported with acceleration in both the kernel and X with the suncg6 driver. A hardware cursor is supported; the hardware supports 8 bit colour only.
  • Sun ZX/Leo - has accelerated kernel support but no X yet. The sunleo driver from Xorg should work without changes but doesn't support any kind of acceleration yet. The console runs in 8 bit, X will support 24 bit.
  • Sun BW2 - has kernel support, should work with the wsfb driver in X. The board doesn't support a hardware cursor or any kind of acceleration. The hardware is monochrome only.
  • Weitek P9100 - found in Tadpole SPARCbook 3 series laptops, supported with acceleration in both the kernel and X with the pnozz driver. Hardware cursor is supported. The console runs in 8 bit, X can run in 8, 16 or 24 bit colour.
  • Sun S24/TCX - supported with acceleration in both the kernel and X with the suntcx driver. A hardware cursor is supported ( only on the S24, the 8bit TCX's DAC doesn't support it ). The console runs in 8 bit, X in 8 or 24 bit.
  • Sun CG14 - supported with acceleration ( using the new sx driver ) and hardware cursor in both the kernel and X. The console runs in 8 bit, X in 24 bit. The X driver supports some xrender acceleration as well.
  • Fujitsu AG-10e - supported with acceleration in both the kernel and X, a hardware cursor is supported. The console runs in 8 bit, X in 24 bit.
  • IGS 1682 found in the JavaStation 10 / Krups - supported, but the chip lacks any acceleration features. It does support a hardware cursor, though, which the wsfb driver can use. Currently X is limited to 8 bit colour although the hardware supports up to 24 bit.
  • Sun CG12 / Matrox SG3 - supported without acceleration in both the kernel and X. The console runs in monochrome or 8 bit, X in 24 bit.
  • Southland Media Systems (now Quantum 3D) MGX - supported with acceleration as console; X is currently limited to wsfb in 8 bit. No hardware cursor support in the driver yet.

All boards with dedicated drivers will work as primary or secondary heads in X; boards which use wsfb will only work in X when they are the system console. For example, you can run an SS20 with four heads: a cg14 as console, plus an AG-10e and two CG6 boards.
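
To illustrate, a two-head setup in xorg.conf might look roughly like the sketch below ( driver names as above; the identifiers and BusIDs are made-up placeholders - take the real ones from your Xorg log ):

Section "Device"
	Identifier "cg14"
	# BusID below is a placeholder, not a real device path
	Driver     "suncg14"
	BusID      "SBUS:placeholder0"
EndSection

Section "Device"
	Identifier "cg6"
	Driver     "suncg6"
	BusID      "SBUS:placeholder1"
EndSection

Section "Screen"
	Identifier "Screen0"
	Device     "cg14"
EndSection

Section "Screen"
	Identifier "Screen1"
	Device     "cg6"
EndSection

Section "ServerLayout"
	Identifier "TwoHeads"
	Screen  0 "Screen0"
	Screen  1 "Screen1" RightOf "Screen0"
EndSection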

There is also a generic kernel driver ( genfb at sbus ) which may or may not work with graphics hardware not listed here, depending on the board's firmware. If it provides standard properties for width, height, colour depth, stride and framebuffer address, it should work - but not all boards do this. For example, the ZX doesn't give a framebuffer address, and there is no reason to assume it's the only one. Also, there is no standard way to program palette registers via firmware, so even if genfb works, the colours are likely off. X should work with the wsfb driver, though it will likely look a bit odd.
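
To make that requirement concrete, here is a minimal sketch in C of the check genfb effectively performs. The property names are the usual Sun framebuffer ones; getpropint() is a hypothetical stand-in for the kernel's real PROM interface, not an actual API:

/* hedged sketch - getpropint() is hypothetical, not the actual kernel API */
#include <stdbool.h>

int getpropint(int node, const char *name, int defval);	/* hypothetical */

bool
genfb_might_work(int node)
{
	int width  = getpropint(node, "width", -1);
	int height = getpropint(node, "height", -1);
	int depth  = getpropint(node, "depth", -1);
	int stride = getpropint(node, "linebytes", -1);
	int fbaddr = getpropint(node, "address", -1);

	/*
	 * If any of these is missing - the ZX, for example, provides no
	 * framebuffer address - genfb has nothing to map and the board
	 * needs a dedicated driver instead.
	 */
	return width > 0 && height > 0 && depth > 0 &&
	    stride > 0 && fbaddr != -1;
}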

Boards like the CG8 have older, pre-wscons kernel support and weren't converted due to lack of hardware. They seem to be pretty rare though; in all the years I've been using NetBSD/sparc I have not seen a single user ask about them.

Finally, 3rd party boards not mentioned here are unsupported for lack of hardware in the right hands.
Graphics hardware supported by NetBSD/sparc64 which isn't listed here should work the same way when running a 32bit userland, but this is mostly untested.


          So they sent me a CI20        
When I found out that Ingenic was giving away some of their MIPS Creator CI20 boards I applied, and to my surprise they sent me one. Of course, the point was to make NetBSD work on it. I just finished the first step.

That is, make it load a kernel, identify and set up the CPU, and attach a serial console. This is what it looks like:

U-Boot SPL 2013.10-rc3-g9329ab16a204 (Jun 26 2014 - 09:43:22)
SDRAM H5TQ2G83CFR initialization... done


U-Boot 2013.10-rc3-g9329ab16a204 (Jun 26 2014 - 09:43:22)

Board: ci20 (Ingenic XBurst JZ4780 SoC)
DRAM:  1 GiB
NAND:  8192 MiB
MMC:   jz_mmc msc1: 0
In:    eserial3
Out:   eserial3
Err:   eserial3
Net:   dm9000
ci20# dhcp
ERROR: resetting DM9000 -> not responding
dm9000 i/o: 0xb6000000, id: 0x90000a46
DM9000: running in 8 bit mode
MAC: d0:31:10:ff:7e:89
operating at 100M full duplex mode
BOOTP broadcast 1
DHCP client bound to address 192.168.0.47
*** Warning: no boot file name; using 'C0A8002F.img'
Using dm9000 device
TFTP from server 192.168.0.44; our IP address is 192.168.0.47
Filename 'C0A8002F.img'.
Load address: 0x88000000
Loading: #################################################################
	 ##############
	 284.2 KiB/s
done
Bytes transferred = 1146945 (118041 hex)
ci20# bootm
## Booting kernel from Legacy Image at 88000000 ...
   Image Name:   evbmips 7.99.1 (CI20)
   Image Type:   MIPS NetBSD Kernel Image (gzip compressed)
   Data Size:    1146881 Bytes = 1.1 MiB
   Load Address: 80020000
   Entry Point:  80020000
   Verifying Checksum ... OK
   Uncompressing Kernel Image ... OK
subcommand not supported
ci20# g 80020000
## Starting applicatpmap_steal_memory: seg 0: 0x30c 0x30c 0xffff 0xffff
Loaded initial symtab at 0x802502d4, strtab at 0x80270cb4, # entries 8323
Copyright (c) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005,
    2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014
    The NetBSD Foundation, Inc.  All rights reserved.
Copyright (c) 1982, 1986, 1989, 1991, 1993
    The Regents of the University of California.  All rights reserved.

NetBSD 7.99.1 (CI20) #113: Sat Nov 22 09:58:39 EST 2014
	ml@blackbush:/home/build/obj_evbmips32/sys/arch/evbmips/compile/CI20
Ingenic XBurst
total memory = 1024 MB
avail memory = 1001 MB
kern.module.path=/stand/evbmips/7.99.1/modules
mainbus0 (root)
cpu0 at mainbus0: 1200.00MHz (hz cycles = 120000, delay divisor = 12)
cpu0: Ingenic XBurst (0x3ee1024f) Rev. 79 with unknown FPC type (0x330000) Rev. 0
cpu0: 32 TLB entries, 16MB max page size
cpu0: 32KB/32B 8-way set-associative L1 instruction cache
cpu0: 32KB/32B 8-way set-associative write-back L1 data cache
com0 at mainbus0: Ingenic UART, working fifo
com0: console

root device:

What works:

  • CPU identification and setup
  • serial console via UART0
  • reset ( by provoking a watchdog timeout )
  • basic timers - enough for delay(), since the CPUs don't have MIPS cycle counters ( see the sketch after this list )
  • dropping into ddb and poking around
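
For illustration, delay() on top of a free-running timer can look roughly like the sketch below; the MMIO address and tick rate are placeholders, not the real JZ4780 values:

/* hedged sketch - placeholder register address and tick rate */
#include <stdint.h>

#define TIMER_BASE	0x10002000u	/* placeholder MMIO address */
#define TIMER_COUNT	(*(volatile uint32_t *)TIMER_BASE)
#define TIMER_HZ	12000000u	/* placeholder tick rate */

void
delay(unsigned int usec)
{
	uint32_t ticks = (uint32_t)((uint64_t)usec * TIMER_HZ / 1000000u);
	uint32_t start = TIMER_COUNT;

	/* unsigned subtraction copes with 32-bit counter wraparound */
	while (TIMER_COUNT - start < ticks)
		continue;
}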

What doesn't work (yet):

  • interrupts
  • everything else

Biggest obstacle - believe it or not, the serial port. The on-chip UARTs are mostly 16550 compatible. Mostly. The difference is one bit in the FIFO control register which, if not set, powers down the UART. So throwing data at the UART by hand worked, but as soon as the com driver took over, the line went dead. It took me a while to find that one.
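
In code, the quirk amounts to something like the sketch below. This is not the actual com(4) change, just a minimal illustration, and it assumes the extra bit is bit 4 of the FCR ( the manual calls it UME, UART module enable ) - treat the exact name and position as assumptions:

/* hedged sketch - bit name/position assumed from the JZ4780 manual */
#include <stdint.h>

#define JZ_UART_FCR	(2 << 2)	/* FCR at the usual 16550 offset, 4-byte stride */
#define FCR_FIFO_EN	0x01		/* standard 16550: enable FIFOs */
#define FCR_UME		0x10		/* Ingenic extension: UART module enable */

void
jz_uart_init_fifo(volatile uint8_t *base)
{
	/*
	 * A stock 16550 driver writes only the standard FIFO bits here;
	 * on this chip that clears the extra enable bit and powers the
	 * UART down - the "line went dead" effect described above.
	 * Keeping the extra bit set in every FCR write fixes it:
	 */
	base[JZ_UART_FCR] = FCR_FIFO_EN | FCR_UME;
}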

          ARM multiprocessor support        
Those following the source-changes mailing list closely may have noticed several evbarm kernels getting "options MULTIPROCESSOR" in the last few days. This is due to those configurations now running properly in SMP mode, thanks to work mostly done by Matt Thomas and Nick Hudson.

The list of supported multiprocessor boards currently is:

  • Banana Pi (BPI)
  • Cubieboard 2 (CUBIEBOARD)
  • Cubietruck (CUBIETRUCK)
  • Merrii Hummingbird A31 (HUMMINGBIRD_A31)
  • CUBOX-I
  • NITROGEN6X

Details on how to create bootable media and various other information for the Allwinner boards can be found on the NetBSD/evbarm on Allwinner Technology SoCs wiki page.

The release engineering team is discussing how to bring all those changes into the netbsd-7 branch as well, so that we can call NetBSD 7.0 "the ARM SoC release".

While multicore ARM chips are mostly known for being used in cell phones and tablet devices, there are also some nice "tiny PC" variants out there, like the CubieTruck, which originally comes with a small transparent case that allows piggybacking it onto a 2.5" hard disk:


Image from cubieboard.org under creative commons license.

This is nice to put next to your display, but a bit too tiny and fragile for my test lab - so I reused an old (originally mac68k cartridge) SCSI enclosure for mine:


Image by myself under creative commons license.

This machine is used to run regular tests for big endian (!) arm; the results are gathered here. Running it big-endian is just a way to trigger more bugs.

The last test run logged on the page was already done with an SMP kernel. No regressions have been found so far, and the remaining bugs (slightly more than 30 failures in the test run is still way too many) will be addressed one by one.

Now happy multi-ARM-ing everyone, and I am looking forward to a great NetBSD 7.0 release!


          The playstation2 port is back        
In 2009 the playstation2 port was removed from the NetBSD sources, since it had not been compilable for months and no sufficiently modern compiler was available.

Due to a strange series of events, the code changes needed to support the (slightly unusual) MIPS CPU used in the playstation2 had never been merged into mainline gcc or binutils. This has only recently been fixed. Unfortunately the changes have not been pulled up to the gcc 4.8.3 branch (which is available in NetBSD-current), so an external toolchain from pkgsrc is needed for the playstation2.

To install this toolchain, use a pkgsrc-current checkout and cd to cross/gcc-mips-current, then do "make install" - that is all.
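
In practice that boils down to ( assuming pkgsrc is checked out in /usr/pkgsrc, and run as root ):

# cd /usr/pkgsrc/cross/gcc-mips-current
# make install

The NetBSD build then has to be pointed at the resulting cross compiler; the EXTERNAL_TOOLCHAIN variable in mk.conf is the usual knob for that.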

Work is in progress to bring the old code up to -current. Hopefully a bootable NetBSD-current kernel will be available soon.


          SX support added        
Support for Sun's SX rendering engine ( found in the memory controllers of the SPARCstation 10SX and 20 ) has been added, both for the console and X. Both drivers support basic acceleration ( block copy, rectangle fill, character drawing in the kernel ), and the Xorg driver also supports Xrender acceleration. This probably makes SX the oldest supported hardware which can do that.

SX is more or less a vector processor built into a memory controller. The 'or less' part comes from the fact that it can't actually read instructions from memory - the CPU has to feed them one by one. This isn't quite as bad as it sounds - SX has plenty of registers ( 128 - eight of them have special functions, the rest are free for all ) and every instruction takes a count to operate on several subsequent registers or memory locations ( ALU ops can use up to 16, memory accesses up to 32 ). SX supports some parallelism too - the ALUs can do up to two 16bit multiplications and two other arithmetic or logical ops per clock cycle ( 32bit multiplications use both ALUs ). The thing run